From 498126a8ad6611848baa5cac4a516edea8be6e1f Mon Sep 17 00:00:00 2001
From: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com>
Date: Tue, 14 Nov 2023 14:31:48 -0800
Subject: [PATCH 001/199] OpenAI and Bedrock Preview (#971)

* Add OpenAI Test Infrastructure (#926)
* Add openai to tox
* Add OpenAI test files.
* Add test functions.
* [Mega-Linter] Apply linters fixes

---------

Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
Co-authored-by: mergify[bot]

* OpenAI Mock Backend (#929)
* Add mock external openai server
* Add mocked OpenAI server fixtures
* Set up recorded responses.
* Clean mock server to depend on http server
* Linting
* Pin flask version for flask restx tests. (#931)
* Ignore new redis methods. (#932)

Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com>

* Remove approved paths
* Update CI Image (#930)
* Update available python versions in CI
* Update makefile with overrides
* Fix default branch detection for arm builds

---------

Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>

* Add mocking for embedding endpoint
* [Mega-Linter] Apply linters fixes
* Add ratelimit headers
* [Mega-Linter] Apply linters fixes
* Only get package version once (#928)
* Only get package version once
* Add disconnect method
* Add disconnect method

---------

Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>

* Add datalib dependency for embedding testing.
* Add OpenAI Test Infrastructure (#926)
* Add openai to tox
* Add OpenAI test files.
* Add test functions.
* [Mega-Linter] Apply linters fixes

---------

Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
Co-authored-by: mergify[bot]

* Add mock external openai server
* Add mocked OpenAI server fixtures
* Set up recorded responses.
* Clean mock server to depend on http server
* Linting
* Remove approved paths
* Add mocking for embedding endpoint
* [Mega-Linter] Apply linters fixes
* Add ratelimit headers
* [Mega-Linter] Apply linters fixes
* Add datalib dependency for embedding testing.

---------

Co-authored-by: Uma Annamalai
Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com>
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
Co-authored-by: TimPansino
Co-authored-by: Hannah Stepanek
Co-authored-by: mergify[bot]

* Update OpenAI testing infra to match bedrock (#939)
* Add OpenAI sync chat completion instrumentation (#934)
* Add openai sync instrumentation.
* Remove commented code.
* Test cleanup.
* Add request/ response IDs.
* Fixups.
* Add conversation ID to message events.

---------

Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>

* Add OpenAI sync embedding instrumentation (#938)
* Add sync instrumentation for OpenAI embeddings.
* Remove comments.
* Clean up embedding event dictionary.
* Update response_time to duration.
* Linting fixes.
* [Mega-Linter] Apply linters fixes
* Trigger tests

---------

Co-authored-by: umaannamalai
Co-authored-by: Hannah Stepanek

* Instrument acreate's for open-ai (#935)
* Instrument acreate's for open ai async
* Remove duplicated vendor
* Re-use expected & input payloads in tests
* Attach ml_event to APM entity by default (#940)
* Attach non InferenceEvents to APM entity
* Validate both resource payloads
* Add tests for non-inference events
* Add OpenAI sync embedding instrumentation (#938)
* Add sync instrumentation for OpenAI embeddings.
* Remove comments.
* Clean up embedding event dictionary.
* Update response_time to duration.
* Linting fixes.
* [Mega-Linter] Apply linters fixes
* Trigger tests

---------

Co-authored-by: umaannamalai
Co-authored-by: Hannah Stepanek

* Fixup: test names

---------

Co-authored-by: Uma Annamalai
Co-authored-by: umaannamalai
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>

* Add truncation for ML events. (#943)
* Add 4096 char truncation for ML events.
* Add max attr check.
* Fixup.
* Fix character length ml event test.
* Ignore test_ml_events.py for Py2.
* Cleanup custom event if checks.
* Add import statement.

---------

Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>

* Add framework metric for OpenAI. (#945)
* Add framework metric for OpenAI.
* [Mega-Linter] Apply linters fixes
* Trigger tests
* Fix missing version info.
* [Mega-Linter] Apply linters fixes

---------

Co-authored-by: umaannamalai
Co-authored-by: Hannah Stepanek
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>

* Add truncation support for ML events recorded outside txns. (#949)
* Add ml tests for outside transaction.
* Update validator.
* Add ML flag to application code path for record_ml_event.
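The truncation behavior described in #943/#949 above (capping ML event attribute values at 4096 characters) can be sketched standalone. This is an illustrative re-creation only, not the agent's actual implementation; the names `MAX_ATTR_CHARS` and `truncate_ml_event_attrs` are hypothetical:

```python
# Illustrative sketch of the 4096-char ML event truncation described
# above -- NOT the agent's real code; all names here are hypothetical.
MAX_ATTR_CHARS = 4096  # limit stated in the commit message


def truncate_ml_event_attrs(attrs):
    # Truncate oversized string values; leave non-string values untouched.
    return {
        key: (value[:MAX_ATTR_CHARS] if isinstance(value, str) else value)
        for key, value in attrs.items()
    }
```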
* Bedrock Testing Infrastructure (#937)
* Add AWS Bedrock testing infrastructure
* Cache Package Version Lookups (#946)
* Cache _get_package_version
* Add Python 2.7 support to get_package_version caching
* [Mega-Linter] Apply linters fixes
* Bump tests

---------

Co-authored-by: SlavaSkvortsov <29122694+SlavaSkvortsov@users.noreply.github.com>
Co-authored-by: TimPansino

* Fix Redis Generator Methods (#947)
* Fix scan_iter for redis
* Replace generator methods
* Update instance info instrumentation
* Remove mistake from uninstrumented methods
* Add skip condition to asyncio generator tests
* Add skip condition to asyncio generator tests

---------

Co-authored-by: Lalleh Rafeei
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>

* Automatic RPM System Updates (#948)
* Checkout old action
* Adding RPM action
* Add dry run
* Incorporating action into workflow
* Wire secret into custom action
* Enable action
* Correct action name
* Fix syntax
* Fix quoting issues
* Drop pre-verification. Does not work on python
* Fix merge artifact
* Remove OpenAI references

---------

Co-authored-by: Uma Annamalai
Co-authored-by: SlavaSkvortsov <29122694+SlavaSkvortsov@users.noreply.github.com>
Co-authored-by: TimPansino
Co-authored-by: Lalleh Rafeei
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>

* Mock openai error responses (#950)
* Add example tests and mock error responses
* Set invalid api key in auth error test

Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com>

---------

Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com>

* OpenAI ErrorTrace attributes (#941)
* Add openai sync instrumentation.
* Remove commented code.
* Initial openai error commit
* Add example tests and mock error responses
* Changes to attribute collection
* Change error tests to match mock server
* [Mega-Linter] Apply linters fixes
* Trigger tests
* Add dt_enabled decorator to error tests
* Add embedded and async error tests
* [Mega-Linter] Apply linters fixes
* Trigger tests
* Add http.statusCode to span before notice_error call
* Report number of messages in error trace even if 0
* Revert notice_error and add _nr_message attr
* Remove enabled_ml_settings as not needed
* Add stats engine _nr_message test
* [Mega-Linter] Apply linters fixes
* Trigger tests
* Revert black formatting in unicode/byte messages

---------

Co-authored-by: Uma Annamalai
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
Co-authored-by: Hannah Stepanek
Co-authored-by: lrafeei
Co-authored-by: hmstepanek

* Bedrock Sync Chat Completion Instrumentation (#953)
* Add AWS Bedrock testing infrastructure
* Squashed commit of the following:

commit 2834663794c649124052e510c1c9557a830c060a
Author: Timothy Pansino <11214426+TimPansino@users.noreply.github.com>
Date: Mon Oct 9 17:42:05 2023 -0700

    OpenAI Mock Backend (#929)

    * Add mock external openai server
    * Add mocked OpenAI server fixtures
    * Set up recorded responses.
    * Clean mock server to depend on http server
    * Linting
    * Pin flask version for flask restx tests. (#931)
    * Ignore new redis methods. (#932)

    Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com>

    * Remove approved paths
    * Update CI Image (#930)
    * Update available python versions in CI
    * Update makefile with overrides
    * Fix default branch detection for arm builds

    ---------

    Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>

    * Add mocking for embedding endpoint
    * [Mega-Linter] Apply linters fixes
    * Add ratelimit headers
    * [Mega-Linter] Apply linters fixes
    * Only get package version once (#928)
    * Only get package version once
    * Add disconnect method
    * Add disconnect method

    ---------

    Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>

    * Add datalib dependency for embedding testing.
    * Add OpenAI Test Infrastructure (#926)
    * Add openai to tox
    * Add OpenAI test files.
    * Add test functions.
    * [Mega-Linter] Apply linters fixes

    ---------

    Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
    Co-authored-by: mergify[bot]

    * Add mock external openai server
    * Add mocked OpenAI server fixtures
    * Set up recorded responses.
    * Clean mock server to depend on http server
    * Linting
    * Remove approved paths
    * Add mocking for embedding endpoint
    * [Mega-Linter] Apply linters fixes
    * Add ratelimit headers
    * [Mega-Linter] Apply linters fixes
    * Add datalib dependency for embedding testing.

    ---------

    Co-authored-by: Uma Annamalai
    Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com>
    Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
    Co-authored-by: TimPansino
    Co-authored-by: Hannah Stepanek
    Co-authored-by: mergify[bot]

commit db63d4598c94048986c0e00ebb2cd8827100b54c
Author: Uma Annamalai
Date: Mon Oct 2 15:31:38 2023 -0700

    Add OpenAI Test Infrastructure (#926)

    * Add openai to tox
    * Add OpenAI test files.
    * Add test functions.
    * [Mega-Linter] Apply linters fixes

    ---------

    Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
    Co-authored-by: mergify[bot]

* Squashed commit of the following:

commit 182c7a8c8a91e2d0f234f7ed7d4a14a2422c8342
Author: Uma Annamalai
Date: Fri Oct 13 10:12:55 2023 -0700

    Add request/ response IDs.

commit f6d13f822c22d2039ec32be86b2c54f9dc3de1c9
Author: Uma Annamalai
Date: Thu Oct 12 13:23:39 2023 -0700

    Test cleanup.

commit d0576631d009e481bd5887a3243aac99b097d823
Author: Uma Annamalai
Date: Tue Oct 10 10:23:00 2023 -0700

    Remove commented code.

commit dd29433e719482babbe5c724e7330b1f6324abd7
Author: Uma Annamalai
Date: Tue Oct 10 10:19:01 2023 -0700

    Add openai sync instrumentation.

commit 2834663794c649124052e510c1c9557a830c060a
Author: Timothy Pansino <11214426+TimPansino@users.noreply.github.com>
Date: Mon Oct 9 17:42:05 2023 -0700

    OpenAI Mock Backend (#929)

    * Add mock external openai server
    * Add mocked OpenAI server fixtures
    * Set up recorded responses.
    * Clean mock server to depend on http server
    * Linting
    * Pin flask version for flask restx tests. (#931)
    * Ignore new redis methods. (#932)

    Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com>

    * Remove approved paths
    * Update CI Image (#930)
    * Update available python versions in CI
    * Update makefile with overrides
    * Fix default branch detection for arm builds

    ---------

    Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>

    * Add mocking for embedding endpoint
    * [Mega-Linter] Apply linters fixes
    * Add ratelimit headers
    * [Mega-Linter] Apply linters fixes
    * Only get package version once (#928)
    * Only get package version once
    * Add disconnect method
    * Add disconnect method

    ---------

    Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>

    * Add datalib dependency for embedding testing.
    * Add OpenAI Test Infrastructure (#926)
    * Add openai to tox
    * Add OpenAI test files.
    * Add test functions.
    * [Mega-Linter] Apply linters fixes

    ---------

    Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
    Co-authored-by: mergify[bot]

    * Add mock external openai server
    * Add mocked OpenAI server fixtures
    * Set up recorded responses.
    * Clean mock server to depend on http server
    * Linting
    * Remove approved paths
    * Add mocking for embedding endpoint
    * [Mega-Linter] Apply linters fixes
    * Add ratelimit headers
    * [Mega-Linter] Apply linters fixes
    * Add datalib dependency for embedding testing.

    ---------

    Co-authored-by: Uma Annamalai
    Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com>
    Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
    Co-authored-by: TimPansino
    Co-authored-by: Hannah Stepanek
    Co-authored-by: mergify[bot]

commit db63d4598c94048986c0e00ebb2cd8827100b54c
Author: Uma Annamalai
Date: Mon Oct 2 15:31:38 2023 -0700

    Add OpenAI Test Infrastructure (#926)

    * Add openai to tox
    * Add OpenAI test files.
    * Add test functions.
* [Mega-Linter] Apply linters fixes --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: mergify[bot] * TEMP * Bedrock titan extraction nearly complete * Cleaning up titan bedrock implementation * TEMP * Tests for bedrock passing Co-authored-by: Lalleh Rafeei * Cleaned up titan testing Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek * Parametrized bedrock testing * Add support for AI21-J2 models * Change to dynamic no conversation id events * Add cohere model * Remove openai instrumentation from this branch * Remove OpenAI from newrelic/config.py --------- Co-authored-by: Uma Annamalai Co-authored-by: Tim Pansino Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek * AWS Bedrock Embedding Instrumentation (#957) * AWS Bedrock embedding instrumentation * Correct symbol name * Add support for bedrock claude (#960) Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> * Combine Botocore Tests (#959) * Initial file migration * Enable DT on all span tests * Add pytest skip for older botocore versions * Fixup: app name merge conflict --------- Co-authored-by: Hannah Stepanek * Pin openai tests to below 1.0 (#962) * Pin openai below 1.0 * Fixup * Add openai feedback support (#942) * Add get_ai_message_ids & message id capturing * Add tests * Remove generator * Add tests for conversation id unset * Add error code to mocked responses * Remove bedrock tests --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: Uma Annamalai * Add ingest source to openai events (#961) * Pin openai below 1.0 * Fixup * Add ingest_source to events * Remove duplicate test file * Handle 0.32.0.post1 version in tests (#963) --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Handle 0.32.0.post1 version in tests (#963) * Initial merge commit * Update moto * Test for Bedrock embeddings metrics * Add 
record_llm_feedback_event API (#964) * Implement record_ai_feedback API. * [Mega-Linter] Apply linters fixes * Change API name to record_ai_feedback_event. * Fix API naming. * Rename to record_llm_feedback_event and get_llm_message_ids. * [Mega-Linter] Apply linters fixes * Address review feedback. * Update test structure. * [Mega-Linter] Apply linters fixes * Bump tests. --------- Co-authored-by: umaannamalai * Bedrock Error Tracing (#966) * Cache Package Version Lookups (#946) * Cache _get_package_version * Add Python 2.7 support to get_package_version caching * [Mega-Linter] Apply linters fixes * Bump tests --------- Co-authored-by: SlavaSkvortsov <29122694+SlavaSkvortsov@users.noreply.github.com> Co-authored-by: TimPansino * Fix Redis Generator Methods (#947) * Fix scan_iter for redis * Replace generator methods * Update instance info instrumentation * Remove mistake from uninstrumented methods * Add skip condition to asyncio generator tests * Add skip condition to asyncio generator tests --------- Co-authored-by: Lalleh Rafeei Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Automatic RPM System Updates (#948) * Checkout old action * Adding RPM action * Add dry run * Incorporating action into workflow * Wire secret into custom action * Enable action * Correct action name * Fix syntax * Fix quoting issues * Drop pre-verification. 
Does not work on python * Fix merge artifact * Drop python 3.7 tests for Hypercorn (#954) * Fix pyenv installation for devcontainer (#936) Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Remove duplicate kafka import hook (#956) Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Initial bedrock error tracing commit * Handle 0.32.0.post1 version in tests (#963) * Add status code to mock bedrock server * Updating error response recording logic * Work on bedrock errror tracing * Chat completion error tracing * Adding embedding error tracing * Delete comment * Update moto --------- Co-authored-by: SlavaSkvortsov <29122694+SlavaSkvortsov@users.noreply.github.com> Co-authored-by: TimPansino Co-authored-by: Lalleh Rafeei Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> Co-authored-by: Hannah Stepanek * Fix expected chat completion tests * Remove commented out code * Correct Bedrock metric name --------- Co-authored-by: Uma Annamalai Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: mergify[bot] Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> Co-authored-by: TimPansino Co-authored-by: Hannah Stepanek Co-authored-by: umaannamalai Co-authored-by: SlavaSkvortsov <29122694+SlavaSkvortsov@users.noreply.github.com> Co-authored-by: lrafeei Co-authored-by: hmstepanek Co-authored-by: Tim Pansino --- newrelic/agent.py | 6 + newrelic/api/ml_model.py | 49 + newrelic/api/time_trace.py | 43 +- newrelic/api/transaction.py | 11 +- newrelic/config.py | 15 + newrelic/core/application.py | 2 +- newrelic/core/attribute.py | 2 + newrelic/core/custom_event.py | 21 +- newrelic/core/otlp_utils.py | 47 +- newrelic/core/stats_engine.py | 5 + newrelic/hooks/external_botocore.py | 460 ++- newrelic/hooks/mlmodel_openai.py | 519 +++ tests/agent_features/conftest.py | 
1 + .../agent_features/test_exception_messages.py | 93 +- tests/agent_features/test_ml_events.py | 189 +- .../test_record_llm_feedback_event.py | 95 + tests/external_boto3/conftest.py | 30 - .../_mock_external_bedrock_server.py | 3461 +++++++++++++++++ .../_test_bedrock_chat_completion.py | 317 ++ .../_test_bedrock_embeddings.py | 74 + tests/external_botocore/conftest.py | 151 +- .../test_bedrock_chat_completion.py | 233 ++ .../test_bedrock_embeddings.py | 159 + .../test_boto3_iam.py | 4 +- .../test_boto3_s3.py | 4 +- .../test_boto3_sns.py | 6 +- .../test_botocore_dynamodb.py | 6 +- tests/external_botocore/test_botocore_ec2.py | 4 +- tests/external_botocore/test_botocore_s3.py | 4 +- tests/external_botocore/test_botocore_sqs.py | 6 +- .../_mock_external_openai_server.py | 226 ++ tests/mlmodel_openai/conftest.py | 156 + tests/mlmodel_openai/test_chat_completion.py | 347 ++ .../test_chat_completion_error.py | 328 ++ tests/mlmodel_openai/test_embeddings.py | 143 + tests/mlmodel_openai/test_embeddings_error.py | 264 ++ .../test_get_llm_message_ids.py | 234 ++ .../validators/validate_ml_event_payload.py | 82 +- .../validators/validate_ml_events.py | 3 +- tox.ini | 14 +- 40 files changed, 7612 insertions(+), 202 deletions(-) create mode 100644 newrelic/hooks/mlmodel_openai.py create mode 100644 tests/agent_features/test_record_llm_feedback_event.py delete mode 100644 tests/external_boto3/conftest.py create mode 100644 tests/external_botocore/_mock_external_bedrock_server.py create mode 100644 tests/external_botocore/_test_bedrock_chat_completion.py create mode 100644 tests/external_botocore/_test_bedrock_embeddings.py create mode 100644 tests/external_botocore/test_bedrock_chat_completion.py create mode 100644 tests/external_botocore/test_bedrock_embeddings.py rename tests/{external_boto3 => external_botocore}/test_boto3_iam.py (95%) rename tests/{external_boto3 => external_botocore}/test_boto3_s3.py (97%) rename tests/{external_boto3 => 
external_botocore}/test_boto3_sns.py (94%) create mode 100644 tests/mlmodel_openai/_mock_external_openai_server.py create mode 100644 tests/mlmodel_openai/conftest.py create mode 100644 tests/mlmodel_openai/test_chat_completion.py create mode 100644 tests/mlmodel_openai/test_chat_completion_error.py create mode 100644 tests/mlmodel_openai/test_embeddings.py create mode 100644 tests/mlmodel_openai/test_embeddings_error.py create mode 100644 tests/mlmodel_openai/test_get_llm_message_ids.py diff --git a/newrelic/agent.py b/newrelic/agent.py index 2c7f0fb858..bc6cdbbd3a 100644 --- a/newrelic/agent.py +++ b/newrelic/agent.py @@ -153,6 +153,10 @@ def __asgi_application(*args, **kwargs): from newrelic.api.message_transaction import ( wrap_message_transaction as __wrap_message_transaction, ) +from newrelic.api.ml_model import get_llm_message_ids as __get_llm_message_ids +from newrelic.api.ml_model import ( + record_llm_feedback_event as __record_llm_feedback_event, +) from newrelic.api.ml_model import wrap_mlmodel as __wrap_mlmodel from newrelic.api.profile_trace import ProfileTraceWrapper as __ProfileTraceWrapper from newrelic.api.profile_trace import profile_trace as __profile_trace @@ -340,3 +344,5 @@ def __asgi_application(*args, **kwargs): insert_html_snippet = __wrap_api_call(__insert_html_snippet, "insert_html_snippet") verify_body_exists = __wrap_api_call(__verify_body_exists, "verify_body_exists") wrap_mlmodel = __wrap_api_call(__wrap_mlmodel, "wrap_mlmodel") +get_llm_message_ids = __wrap_api_call(__get_llm_message_ids, "get_llm_message_ids") +record_llm_feedback_event = __wrap_api_call(__record_llm_feedback_event, "record_llm_feedback_event") diff --git a/newrelic/api/ml_model.py b/newrelic/api/ml_model.py index edbcaf3406..d01042b359 100644 --- a/newrelic/api/ml_model.py +++ b/newrelic/api/ml_model.py @@ -13,7 +13,10 @@ # limitations under the License. 
import sys +import uuid +import warnings +from newrelic.api.transaction import current_transaction from newrelic.common.object_names import callable_name from newrelic.hooks.mlmodel_sklearn import _nr_instrument_model @@ -33,3 +36,49 @@ def wrap_mlmodel(model, name=None, version=None, feature_names=None, label_names model._nr_wrapped_label_names = label_names if metadata: model._nr_wrapped_metadata = metadata + + +def get_llm_message_ids(response_id=None): + transaction = current_transaction() + if response_id and transaction: + nr_message_ids = getattr(transaction, "_nr_message_ids", {}) + message_id_info = nr_message_ids.pop(response_id, ()) + + if not message_id_info: + warnings.warn("No message ids found for %s" % response_id) + return [] + + conversation_id, request_id, ids = message_id_info + + return [{"conversation_id": conversation_id, "request_id": request_id, "message_id": _id} for _id in ids] + warnings.warn("No message ids found. get_llm_message_ids must be called within the scope of a transaction.") + return [] + + +def record_llm_feedback_event( + message_id, rating, conversation_id=None, request_id=None, category=None, message=None, metadata=None +): + transaction = current_transaction() + if not transaction: + warnings.warn( + "No message feedback events will be recorded. record_llm_feedback_event must be called within the " + "scope of a transaction." 
+ ) + return + + feedback_message_id = str(uuid.uuid4()) + metadata = metadata or {} + + feedback_message_event = { + "id": feedback_message_id, + "message_id": message_id, + "rating": rating, + "conversation_id": conversation_id or "", + "request_id": request_id or "", + "category": category or "", + "message": message or "", + "ingest_source": "Python", + } + feedback_message_event.update(metadata) + + transaction.record_ml_event("LlmFeedbackMessage", feedback_message_event) diff --git a/newrelic/api/time_trace.py b/newrelic/api/time_trace.py index 24be0e00f6..40ef225129 100644 --- a/newrelic/api/time_trace.py +++ b/newrelic/api/time_trace.py @@ -29,7 +29,6 @@ ) from newrelic.core.config import is_expected_error, should_ignore_error from newrelic.core.trace_cache import trace_cache - from newrelic.packages import six _logger = logging.getLogger(__name__) @@ -260,6 +259,11 @@ def _observe_exception(self, exc_info=None, ignore=None, expected=None, status_c module, name, fullnames, message_raw = parse_exc_info((exc, value, tb)) fullname = fullnames[0] + # In case message is in JSON format for OpenAI models + # this will result in a "cleaner" message format + if getattr(value, "_nr_message", None): + message_raw = value._nr_message + # Check to see if we need to strip the message before recording it. 
if settings.strip_exception_messages.enabled and fullname not in settings.strip_exception_messages.allowlist: @@ -422,23 +426,32 @@ def notice_error(self, error=None, attributes=None, expected=None, ignore=None, input_attributes = {} input_attributes.update(transaction._custom_params) input_attributes.update(attributes) - error_group_name_raw = settings.error_collector.error_group_callback(value, { - "traceback": tb, - "error.class": exc, - "error.message": message_raw, - "error.expected": is_expected, - "custom_params": input_attributes, - "transactionName": getattr(transaction, "name", None), - "response.status": getattr(transaction, "_response_code", None), - "request.method": getattr(transaction, "_request_method", None), - "request.uri": getattr(transaction, "_request_uri", None), - }) + error_group_name_raw = settings.error_collector.error_group_callback( + value, + { + "traceback": tb, + "error.class": exc, + "error.message": message_raw, + "error.expected": is_expected, + "custom_params": input_attributes, + "transactionName": getattr(transaction, "name", None), + "response.status": getattr(transaction, "_response_code", None), + "request.method": getattr(transaction, "_request_method", None), + "request.uri": getattr(transaction, "_request_uri", None), + }, + ) if error_group_name_raw: _, error_group_name = process_user_attribute("error.group.name", error_group_name_raw) if error_group_name is None or not isinstance(error_group_name, six.string_types): - raise ValueError("Invalid attribute value for error.group.name. Expected string, got: %s" % repr(error_group_name_raw)) + raise ValueError( + "Invalid attribute value for error.group.name. 
Expected string, got: %s" + % repr(error_group_name_raw) + ) except Exception: - _logger.error("Encountered error when calling error group callback:\n%s", "".join(traceback.format_exception(*sys.exc_info()))) + _logger.error( + "Encountered error when calling error group callback:\n%s", + "".join(traceback.format_exception(*sys.exc_info())), + ) error_group_name = None transaction._create_error_node( @@ -595,13 +608,11 @@ def update_async_exclusive_time(self, min_child_start_time, exclusive_duration): def process_child(self, node, is_async): self.children.append(node) if is_async: - # record the lowest start time self.min_child_start_time = min(self.min_child_start_time, node.start_time) # if there are no children running, finalize exclusive time if self.child_count == len(self.children): - exclusive_duration = node.end_time - self.min_child_start_time self.update_async_exclusive_time(self.min_child_start_time, exclusive_duration) diff --git a/newrelic/api/transaction.py b/newrelic/api/transaction.py index 988b56be6e..d6e960d5aa 100644 --- a/newrelic/api/transaction.py +++ b/newrelic/api/transaction.py @@ -191,6 +191,7 @@ def __init__(self, application, enabled=None, source=None): self._frameworks = set() self._message_brokers = set() self._dispatchers = set() + self._ml_models = set() self._frozen_path = None @@ -559,6 +560,10 @@ def __exit__(self, exc, value, tb): for dispatcher, version in self._dispatchers: self.record_custom_metric("Python/Dispatcher/%s/%s" % (dispatcher, version), 1) + if self._ml_models: + for ml_model, version in self._ml_models: + self.record_custom_metric("Python/ML/%s/%s" % (ml_model, version), 1) + if self._settings.distributed_tracing.enabled: # Sampled and priority need to be computed at the end of the # transaction when distributed tracing or span events are enabled. 
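The `_ml_models` bookkeeping added to `Transaction` above boils down to a simple pattern: each `(name, version)` pair registered during the transaction yields one `Python/ML/<name>/<version>` supportability metric at exit. A standalone sketch of that pattern (using an illustrative stand-in class, not the agent's real `Transaction`):

```python
# Sketch of the _ml_models -> supportability-metric flow from the patch above.
# _FakeTransaction is a hypothetical stand-in for illustration only.
class _FakeTransaction:
    def __init__(self):
        self._ml_models = set()  # mirrors the new set added in Transaction.__init__
        self.recorded = []

    def add_ml_model_info(self, name, version=None):
        # mirrors Transaction.add_ml_model_info: falsy names are ignored
        if name:
            self._ml_models.add((name, version))

    def finalize(self):
        # mirrors the new block in Transaction.__exit__: one metric per pair
        for ml_model, version in self._ml_models:
            self.recorded.append(("Python/ML/%s/%s" % (ml_model, version), 1))

txn = _FakeTransaction()
txn.add_ml_model_info("OpenAI", "0.28.1")
txn.add_ml_model_info("")  # dropped: no name
txn.finalize()
```

Because `_ml_models` is a set, instrumenting the same model repeatedly within one transaction still records the metric only once.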
@@ -1648,7 +1653,7 @@ def record_ml_event(self, event_type, params): if not settings.ml_insights_events.enabled: return - event = create_custom_event(event_type, params) + event = create_custom_event(event_type, params, is_ml_event=True) if event: self._ml_events.add(event, priority=self.priority) @@ -1755,6 +1760,10 @@ def add_dispatcher_info(self, name, version=None): if name: self._dispatchers.add((name, version)) + def add_ml_model_info(self, name, version=None): + if name: + self._ml_models.add((name, version)) + def dump(self, file): """Dumps details about the transaction to the file object.""" diff --git a/newrelic/config.py b/newrelic/config.py index 5c4c52464f..6fe19705f2 100644 --- a/newrelic/config.py +++ b/newrelic/config.py @@ -2037,6 +2037,21 @@ def _process_trace_cache_import_hooks(): def _process_module_builtin_defaults(): + _process_module_definition( + "openai.api_resources.embedding", + "newrelic.hooks.mlmodel_openai", + "instrument_openai_api_resources_embedding", + ) + _process_module_definition( + "openai.api_resources.chat_completion", + "newrelic.hooks.mlmodel_openai", + "instrument_openai_api_resources_chat_completion", + ) + _process_module_definition( + "openai.util", + "newrelic.hooks.mlmodel_openai", + "instrument_openai_util", + ) _process_module_definition( "asyncio.base_events", "newrelic.hooks.coroutines_asyncio", diff --git a/newrelic/core/application.py b/newrelic/core/application.py index 82cdf8a9a0..c681bc3f01 100644 --- a/newrelic/core/application.py +++ b/newrelic/core/application.py @@ -932,7 +932,7 @@ def record_ml_event(self, event_type, params): if settings is None or not settings.ml_insights_events.enabled: return - event = create_custom_event(event_type, params) + event = create_custom_event(event_type, params, is_ml_event=True) if event: with self._stats_custom_lock: diff --git a/newrelic/core/attribute.py b/newrelic/core/attribute.py index 10ae8e4597..a872b4b1b0 100644 --- a/newrelic/core/attribute.py +++ 
b/newrelic/core/attribute.py @@ -89,6 +89,8 @@ MAX_NUM_USER_ATTRIBUTES = 128 MAX_ATTRIBUTE_LENGTH = 255 +MAX_NUM_ML_USER_ATTRIBUTES = 64 +MAX_ML_ATTRIBUTE_LENGTH = 4095 MAX_64_BIT_INT = 2**63 - 1 MAX_LOG_MESSAGE_LENGTH = 32768 diff --git a/newrelic/core/custom_event.py b/newrelic/core/custom_event.py index 206fb84e68..b86dc25998 100644 --- a/newrelic/core/custom_event.py +++ b/newrelic/core/custom_event.py @@ -18,7 +18,7 @@ from newrelic.core.attribute import (check_name_is_string, check_name_length, process_user_attribute, NameIsNotStringException, NameTooLongException, - MAX_NUM_USER_ATTRIBUTES) + MAX_NUM_USER_ATTRIBUTES, MAX_ML_ATTRIBUTE_LENGTH, MAX_NUM_ML_USER_ATTRIBUTES, MAX_ATTRIBUTE_LENGTH) _logger = logging.getLogger(__name__) @@ -72,7 +72,8 @@ def process_event_type(name): else: return name -def create_custom_event(event_type, params): + +def create_custom_event(event_type, params, is_ml_event=False): """Creates a valid custom event. Ensures that the custom event has a valid name, and also checks @@ -83,6 +84,8 @@ def create_custom_event(event_type, params): Args: event_type (str): The type (name) of the custom event. params (dict): Attributes to add to the event. + is_ml_event (bool): Boolean indicating whether create_custom_event was called from + record_ml_event for truncation purposes Returns: Custom event (list of 2 dicts), if successful. @@ -99,12 +102,18 @@ def create_custom_event(event_type, params): try: for k, v in params.items(): - key, value = process_user_attribute(k, v) + if is_ml_event: + max_length = MAX_ML_ATTRIBUTE_LENGTH + max_num_attrs = MAX_NUM_ML_USER_ATTRIBUTES + else: + max_length = MAX_ATTRIBUTE_LENGTH + max_num_attrs = MAX_NUM_USER_ATTRIBUTES + key, value = process_user_attribute(k, v, max_length=max_length) if key: - if len(attributes) >= MAX_NUM_USER_ATTRIBUTES: + if len(attributes) >= max_num_attrs: _logger.debug('Maximum number of attributes already ' - 'added to event %r. 
Dropping attribute: %r=%r', - name, key, value) + 'added to event %r. Dropping attribute: %r=%r', + name, key, value) else: attributes[key] = value except Exception: diff --git a/newrelic/core/otlp_utils.py b/newrelic/core/otlp_utils.py index e78a63603e..0719fed33c 100644 --- a/newrelic/core/otlp_utils.py +++ b/newrelic/core/otlp_utils.py @@ -21,6 +21,7 @@ import logging +from newrelic.api.time_trace import get_service_linking_metadata from newrelic.common.encoding_utils import json_encode from newrelic.core.config import global_settings from newrelic.core.stats_engine import CountStats, TimeStats @@ -124,8 +125,11 @@ def create_key_values_from_iterable(iterable): ) -def create_resource(attributes=None): +def create_resource(attributes=None, attach_apm_entity=True): attributes = attributes or {"instrumentation.provider": "newrelic-opentelemetry-python-ml"} + if attach_apm_entity: + metadata = get_service_linking_metadata() + attributes.update(metadata) return Resource(attributes=create_key_values_from_iterable(attributes)) @@ -203,7 +207,7 @@ def stats_to_otlp_metrics(metric_data, start_time, end_time): def encode_metric_data(metric_data, start_time, end_time, resource=None, scope=None): - resource = resource or create_resource() + resource = resource or create_resource(attach_apm_entity=False) return MetricsData( resource_metrics=[ ResourceMetrics( @@ -220,24 +224,45 @@ def encode_metric_data(metric_data, start_time, end_time, resource=None, scope=N def encode_ml_event_data(custom_event_data, agent_run_id): - resource = create_resource() - ml_events = [] + # An InferenceEvent is attached to a separate ML Model entity instead + # of the APM entity. 
+ ml_inference_events = [] + ml_apm_events = [] for event in custom_event_data: event_info, event_attrs = event + event_type = event_info["type"] event_attrs.update( { "real_agent_id": agent_run_id, "event.domain": "newrelic.ml_events", - "event.name": event_info["type"], + "event.name": event_type, } ) ml_attrs = create_key_values_from_iterable(event_attrs) unix_nano_timestamp = event_info["timestamp"] * 1e6 - ml_events.append( - { - "time_unix_nano": int(unix_nano_timestamp), - "attributes": ml_attrs, - } + if event_type == "InferenceEvent": + ml_inference_events.append( + { + "time_unix_nano": int(unix_nano_timestamp), + "attributes": ml_attrs, + } + ) + else: + ml_apm_events.append( + { + "time_unix_nano": int(unix_nano_timestamp), + "attributes": ml_attrs, + } + ) + + resource_logs = [] + if ml_inference_events: + inference_resource = create_resource(attach_apm_entity=False) + resource_logs.append( + ResourceLogs(resource=inference_resource, scope_logs=[ScopeLogs(log_records=ml_inference_events)]) ) + if ml_apm_events: + apm_resource = create_resource() + resource_logs.append(ResourceLogs(resource=apm_resource, scope_logs=[ScopeLogs(log_records=ml_apm_events)])) - return LogsData(resource_logs=[ResourceLogs(resource=resource, scope_logs=[ScopeLogs(log_records=ml_events)])]) + return LogsData(resource_logs=resource_logs) diff --git a/newrelic/core/stats_engine.py b/newrelic/core/stats_engine.py index ebebe7dbe1..e5c39a2df2 100644 --- a/newrelic/core/stats_engine.py +++ b/newrelic/core/stats_engine.py @@ -724,6 +724,11 @@ def notice_error(self, error=None, attributes=None, expected=None, ignore=None, module, name, fullnames, message_raw = parse_exc_info(error) fullname = fullnames[0] + # In the case of JSON formatting for OpenAI models + # this will result in a "cleaner" message format + if getattr(value, "_nr_message", None): + message_raw = value._nr_message + # Check to see if we need to strip the message before recording it.
if settings.strip_exception_messages.enabled and fullname not in settings.strip_exception_messages.allowlist: diff --git a/newrelic/hooks/external_botocore.py b/newrelic/hooks/external_botocore.py index 7d49fbd031..c075f0874d 100644 --- a/newrelic/hooks/external_botocore.py +++ b/newrelic/hooks/external_botocore.py @@ -12,15 +12,34 @@ # See the License for the specific language governing permissions and # limitations under the License. -from newrelic.api.message_trace import message_trace +import json +import logging +import uuid +from io import BytesIO + +from botocore.response import StreamingBody + from newrelic.api.datastore_trace import datastore_trace from newrelic.api.external_trace import ExternalTrace -from newrelic.common.object_wrapper import wrap_function_wrapper +from newrelic.api.function_trace import FunctionTrace +from newrelic.api.message_trace import message_trace +from newrelic.api.time_trace import get_trace_linking_metadata +from newrelic.api.transaction import current_transaction +from newrelic.common.object_names import callable_name +from newrelic.common.object_wrapper import function_wrapper, wrap_function_wrapper +from newrelic.common.package_version_utils import get_package_version +from newrelic.core.config import global_settings + +BOTOCORE_VERSION = get_package_version("botocore") + + +_logger = logging.getLogger(__name__) +UNSUPPORTED_MODEL_WARNING_SENT = False def extract_sqs(*args, **kwargs): - queue_value = kwargs.get('QueueUrl', 'Unknown') - return queue_value.rsplit('/', 1)[-1] + queue_value = kwargs.get("QueueUrl", "Unknown") + return queue_value.rsplit("/", 1)[-1] def extract(argument_names, default=None): @@ -40,43 +59,399 @@ def extractor_string(*args, **kwargs): return extractor_list +def bedrock_error_attributes(exception, request_args, client, extractor): + response = getattr(exception, "response", None) + if not response: + return {} + + request_body = request_args.get("body", "") + error_attributes = 
extractor(request_body)[1] + + error_attributes.update( + { + "request_id": response.get("ResponseMetadata", {}).get("RequestId", ""), + "api_key_last_four_digits": client._request_signer._credentials.access_key[-4:], + "request.model": request_args.get("modelId", ""), + "vendor": "Bedrock", + "ingest_source": "Python", + "http.statusCode": response.get("ResponseMetadata", {}).get("HTTPStatusCode", ""), + "error.message": response.get("Error", {}).get("Message", ""), + "error.code": response.get("Error", {}).get("Code", ""), + } + ) + return error_attributes + + +def create_chat_completion_message_event( + transaction, + app_name, + message_list, + chat_completion_id, + span_id, + trace_id, + request_model, + request_id, + conversation_id, + response_id="", +): + if not transaction: + return + + for index, message in enumerate(message_list): + if response_id: + id_ = "%s-%d" % (response_id, index)  # Response ID was set, append message index to it. + else: + id_ = str(uuid.uuid4())  # No response IDs, use random UUID + + chat_completion_message_dict = { + "id": id_, + "appName": app_name, + "conversation_id": conversation_id, + "request_id": request_id, + "span_id": span_id, + "trace_id": trace_id, + "transaction_id": transaction._transaction_id, + "content": message.get("content", ""), + "role": message.get("role"), + "completion_id": chat_completion_id, + "sequence": index, + "response.model": request_model, + "vendor": "bedrock", + "ingest_source": "Python", + } + transaction.record_ml_event("LlmChatCompletionMessage", chat_completion_message_dict) + + +def extract_bedrock_titan_text_model(request_body, response_body=None): + request_body = json.loads(request_body) + if response_body: + response_body = json.loads(response_body) + + request_config = request_body.get("textGenerationConfig", {}) + + chat_completion_summary_dict = { + "request.max_tokens": request_config.get("maxTokenCount", ""), + "request.temperature": request_config.get("temperature", ""), + } + + 
if response_body: + input_tokens = response_body["inputTextTokenCount"] + completion_tokens = sum(result["tokenCount"] for result in response_body.get("results", [])) + total_tokens = input_tokens + completion_tokens + + message_list = [{"role": "user", "content": request_body.get("inputText", "")}] + message_list.extend( + {"role": "assistant", "content": result["outputText"]} for result in response_body.get("results", []) + ) + + chat_completion_summary_dict.update( + { + "response.choices.finish_reason": response_body["results"][0]["completionReason"], + "response.usage.completion_tokens": completion_tokens, + "response.usage.prompt_tokens": input_tokens, + "response.usage.total_tokens": total_tokens, + "response.number_of_messages": len(message_list), + } + ) + else: + message_list = [] + + return message_list, chat_completion_summary_dict + + +def extract_bedrock_titan_embedding_model(request_body, response_body=None): + if not response_body: + return [], {} # No extracted information necessary for embedding + + request_body = json.loads(request_body) + response_body = json.loads(response_body) + + input_tokens = response_body.get("inputTextTokenCount", None) + + embedding_dict = { + "input": request_body.get("inputText", ""), + "response.usage.prompt_tokens": input_tokens, + "response.usage.total_tokens": input_tokens, + } + return [], embedding_dict + + +def extract_bedrock_ai21_j2_model(request_body, response_body=None): + request_body = json.loads(request_body) + if response_body: + response_body = json.loads(response_body) + + chat_completion_summary_dict = { + "request.max_tokens": request_body.get("maxTokens", ""), + "request.temperature": request_body.get("temperature", ""), + } + + if response_body: + message_list = [{"role": "user", "content": request_body.get("prompt", "")}] + message_list.extend( + {"role": "assistant", "content": result["data"]["text"]} for result in response_body.get("completions", []) + ) + + chat_completion_summary_dict.update( 
+ { + "response.choices.finish_reason": response_body["completions"][0]["finishReason"]["reason"], + "response.number_of_messages": len(message_list), + "response_id": str(response_body.get("id", "")), + } + ) + else: + message_list = [] + + return message_list, chat_completion_summary_dict + + +def extract_bedrock_claude_model(request_body, response_body=None): + request_body = json.loads(request_body) + if response_body: + response_body = json.loads(response_body) + + chat_completion_summary_dict = { + "request.max_tokens": request_body.get("max_tokens_to_sample", ""), + "request.temperature": request_body.get("temperature", ""), + } + + if response_body: + message_list = [ + {"role": "user", "content": request_body.get("prompt", "")}, + {"role": "assistant", "content": response_body.get("completion", "")}, + ] + + chat_completion_summary_dict.update( + { + "response.choices.finish_reason": response_body.get("stop_reason", ""), + "response.number_of_messages": len(message_list), + } + ) + else: + message_list = [] + + return message_list, chat_completion_summary_dict + + +def extract_bedrock_cohere_model(request_body, response_body=None): + request_body = json.loads(request_body) + if response_body: + response_body = json.loads(response_body) + + chat_completion_summary_dict = { + "request.max_tokens": request_body.get("max_tokens", ""), + "request.temperature": request_body.get("temperature", ""), + } + + if response_body: + message_list = [{"role": "user", "content": request_body.get("prompt", "")}] + message_list.extend( + {"role": "assistant", "content": result["text"]} for result in response_body.get("generations", []) + ) + + chat_completion_summary_dict.update( + { + "response.choices.finish_reason": response_body["generations"][0]["finish_reason"], + "response.number_of_messages": len(message_list), + "response_id":
str(response_body.get("id", "")), + } + ) + else: + message_list = [] + + return message_list, chat_completion_summary_dict + + +MODEL_EXTRACTORS = [ # Order is important here, avoiding dictionaries + ("amazon.titan-embed", extract_bedrock_titan_embedding_model), + ("amazon.titan", extract_bedrock_titan_text_model), + ("ai21.j2", extract_bedrock_ai21_j2_model), + ("cohere", extract_bedrock_cohere_model), + ("anthropic.claude", extract_bedrock_claude_model), +] + + +@function_wrapper +def wrap_bedrock_runtime_invoke_model(wrapped, instance, args, kwargs): + # Wrapped function only takes keyword arguments, no need for binding + + transaction = current_transaction() + + if not transaction: + return wrapped(*args, **kwargs) + + transaction.add_ml_model_info("Bedrock", BOTOCORE_VERSION) + + # Read and replace request file stream bodies + request_body = kwargs["body"] + if hasattr(request_body, "read"): + request_body = request_body.read() + kwargs["body"] = request_body + + # Determine model to be used with extractor + model = kwargs.get("modelId") + if not model: + return wrapped(*args, **kwargs) + + # Determine extractor by model type + for extractor_name, extractor in MODEL_EXTRACTORS: + if model.startswith(extractor_name): + break + else: + # Model was not found in extractor list + global UNSUPPORTED_MODEL_WARNING_SENT + if not UNSUPPORTED_MODEL_WARNING_SENT: + # Only send warning once to avoid spam + _logger.warning( + "Unsupported Amazon Bedrock model in use (%s). 
Upgrade to a newer version of the agent, and contact New Relic support if the issue persists.", + model, + ) + UNSUPPORTED_MODEL_WARNING_SENT = True + + extractor = lambda *args: ([], {})  # Empty extractor that returns nothing + + ft_name = callable_name(wrapped) + with FunctionTrace(ft_name) as ft: + try: + response = wrapped(*args, **kwargs) + except Exception as exc: + try: + error_attributes = bedrock_error_attributes(exc, kwargs, instance, extractor) + ft.notice_error( + attributes=error_attributes, + ) + finally: + raise + + if not response: + return response + + # Read and replace response streaming bodies + response_body = response["body"].read() + response["body"] = StreamingBody(BytesIO(response_body), len(response_body)) + response_headers = response["ResponseMetadata"]["HTTPHeaders"] + + if model.startswith("amazon.titan-embed"):  # Only available embedding models + handle_embedding_event( + instance, transaction, extractor, model, response_body, response_headers, request_body, ft.duration + ) + else: + handle_chat_completion_event( + instance, transaction, extractor, model, response_body, response_headers, request_body, ft.duration + ) + + return response + + +def handle_embedding_event( + client, transaction, extractor, model, response_body, response_headers, request_body, duration +): + embedding_id = str(uuid.uuid4()) + available_metadata = get_trace_linking_metadata() + span_id = available_metadata.get("span.id", "") + trace_id = available_metadata.get("trace.id", "") + + request_id = response_headers.get("x-amzn-requestid", "") + settings = transaction.settings if transaction.settings is not None else global_settings() + + _, embedding_dict = extractor(request_body, response_body) + + embedding_dict.update( + { + "vendor": "bedrock", + "ingest_source": "Python", + "id": embedding_id, + "appName": settings.app_name, + "span_id": span_id, + "trace_id": trace_id, + "request_id": request_id,
"transaction_id": transaction._transaction_id, + "api_key_last_four_digits": client._request_signer._credentials.access_key[-4:], + "duration": duration, + "request.model": model, + "response.model": model, + } + ) + + transaction.record_ml_event("LlmEmbedding", embedding_dict) + + +def handle_chat_completion_event( + client, transaction, extractor, model, response_body, response_headers, request_body, duration +): + custom_attrs_dict = transaction._custom_params + conversation_id = custom_attrs_dict.get("conversation_id", "") + + chat_completion_id = str(uuid.uuid4()) + available_metadata = get_trace_linking_metadata() + span_id = available_metadata.get("span.id", "") + trace_id = available_metadata.get("trace.id", "") + + request_id = response_headers.get("x-amzn-requestid", "") + settings = transaction.settings if transaction.settings is not None else global_settings() + + message_list, chat_completion_summary_dict = extractor(request_body, response_body) + response_id = chat_completion_summary_dict.get("response_id", "") + chat_completion_summary_dict.update( + { + "vendor": "bedrock", + "ingest_source": "Python", + "api_key_last_four_digits": client._request_signer._credentials.access_key[-4:], + "id": chat_completion_id, + "appName": settings.app_name, + "conversation_id": conversation_id, + "span_id": span_id, + "trace_id": trace_id, + "transaction_id": transaction._transaction_id, + "request_id": request_id, + "duration": duration, + "request.model": model, + "response.model": model, # Duplicate data required by the UI + } + ) + + transaction.record_ml_event("LlmChatCompletionSummary", chat_completion_summary_dict) + + create_chat_completion_message_event( + transaction=transaction, + app_name=settings.app_name, + message_list=message_list, + chat_completion_id=chat_completion_id, + span_id=span_id, + trace_id=trace_id, + request_model=model, + request_id=request_id, + conversation_id=conversation_id, + response_id=response_id, + ) + + CUSTOM_TRACE_POINTS = 
{ - ('sns', 'publish'): message_trace( - 'SNS', 'Produce', 'Topic', - extract(('TopicArn', 'TargetArn'), 'PhoneNumber')), - ('dynamodb', 'put_item'): datastore_trace( - 'DynamoDB', extract('TableName'), 'put_item'), - ('dynamodb', 'get_item'): datastore_trace( - 'DynamoDB', extract('TableName'), 'get_item'), - ('dynamodb', 'update_item'): datastore_trace( - 'DynamoDB', extract('TableName'), 'update_item'), - ('dynamodb', 'delete_item'): datastore_trace( - 'DynamoDB', extract('TableName'), 'delete_item'), - ('dynamodb', 'create_table'): datastore_trace( - 'DynamoDB', extract('TableName'), 'create_table'), - ('dynamodb', 'delete_table'): datastore_trace( - 'DynamoDB', extract('TableName'), 'delete_table'), - ('dynamodb', 'query'): datastore_trace( - 'DynamoDB', extract('TableName'), 'query'), - ('dynamodb', 'scan'): datastore_trace( - 'DynamoDB', extract('TableName'), 'scan'), - ('sqs', 'send_message'): message_trace( - 'SQS', 'Produce', 'Queue', extract_sqs), - ('sqs', 'send_message_batch'): message_trace( - 'SQS', 'Produce', 'Queue', extract_sqs), - ('sqs', 'receive_message'): message_trace( - 'SQS', 'Consume', 'Queue', extract_sqs), + ("sns", "publish"): message_trace("SNS", "Produce", "Topic", extract(("TopicArn", "TargetArn"), "PhoneNumber")), + ("dynamodb", "put_item"): datastore_trace("DynamoDB", extract("TableName"), "put_item"), + ("dynamodb", "get_item"): datastore_trace("DynamoDB", extract("TableName"), "get_item"), + ("dynamodb", "update_item"): datastore_trace("DynamoDB", extract("TableName"), "update_item"), + ("dynamodb", "delete_item"): datastore_trace("DynamoDB", extract("TableName"), "delete_item"), + ("dynamodb", "create_table"): datastore_trace("DynamoDB", extract("TableName"), "create_table"), + ("dynamodb", "delete_table"): datastore_trace("DynamoDB", extract("TableName"), "delete_table"), + ("dynamodb", "query"): datastore_trace("DynamoDB", extract("TableName"), "query"), + ("dynamodb", "scan"): datastore_trace("DynamoDB", extract("TableName"), 
"scan"), + ("sqs", "send_message"): message_trace("SQS", "Produce", "Queue", extract_sqs), + ("sqs", "send_message_batch"): message_trace("SQS", "Produce", "Queue", extract_sqs), + ("sqs", "receive_message"): message_trace("SQS", "Consume", "Queue", extract_sqs), + ("bedrock-runtime", "invoke_model"): wrap_bedrock_runtime_invoke_model, } -def bind__create_api_method(py_operation_name, operation_name, service_model, - *args, **kwargs): +def bind__create_api_method(py_operation_name, operation_name, service_model, *args, **kwargs): return (py_operation_name, service_model) def _nr_clientcreator__create_api_method_(wrapped, instance, args, kwargs): - (py_operation_name, service_model) = \ - bind__create_api_method(*args, **kwargs) + (py_operation_name, service_model) = bind__create_api_method(*args, **kwargs) service_name = service_model.service_name.lower() tracer = CUSTOM_TRACE_POINTS.get((service_name, py_operation_name)) @@ -95,30 +470,27 @@ def _bind_make_request_params(operation_model, request_dict, *args, **kwargs): def _nr_endpoint_make_request_(wrapped, instance, args, kwargs): operation_model, request_dict = _bind_make_request_params(*args, **kwargs) - url = request_dict.get('url', '') - method = request_dict.get('method', None) - - with ExternalTrace(library='botocore', url=url, method=method, source=wrapped) as trace: + url = request_dict.get("url", "") + method = request_dict.get("method", None) + with ExternalTrace(library="botocore", url=url, method=method, source=wrapped) as trace: try: - trace._add_agent_attribute('aws.operation', operation_model.name) + trace._add_agent_attribute("aws.operation", operation_model.name) except: pass result = wrapped(*args, **kwargs) try: - request_id = result[1]['ResponseMetadata']['RequestId'] - trace._add_agent_attribute('aws.requestId', request_id) + request_id = result[1]["ResponseMetadata"]["RequestId"] + trace._add_agent_attribute("aws.requestId", request_id) except: pass return result def 
instrument_botocore_endpoint(module): - wrap_function_wrapper(module, 'Endpoint.make_request', - _nr_endpoint_make_request_) + wrap_function_wrapper(module, "Endpoint.make_request", _nr_endpoint_make_request_) def instrument_botocore_client(module): - wrap_function_wrapper(module, 'ClientCreator._create_api_method', - _nr_clientcreator__create_api_method_) + wrap_function_wrapper(module, "ClientCreator._create_api_method", _nr_clientcreator__create_api_method_) diff --git a/newrelic/hooks/mlmodel_openai.py b/newrelic/hooks/mlmodel_openai.py new file mode 100644 index 0000000000..a51d8aae87 --- /dev/null +++ b/newrelic/hooks/mlmodel_openai.py @@ -0,0 +1,519 @@ +# Copyright 2010 New Relic, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
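The hooks in this new module repeatedly apply one masking convention for credentials: only the last four characters of the API key are ever recorded, prefixed with `sk-` so the attribute is recognizably an OpenAI-style key. A minimal standalone sketch of that convention (the helper name here is illustrative, not defined by the patch):

```python
# Hypothetical helper showing the api_key_last_four_digits convention used
# inline by the OpenAI hooks: never record the full secret, only its suffix.
def api_key_last_four_digits(api_key):
    # A missing/empty key yields an empty attribute rather than a bogus mask.
    return f"sk-{api_key[-4:]}" if api_key else ""
```

The same `f"sk-{api_key[-4:]}"` expression appears inline in both the error-attribute and event-dictionary paths below.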
+ +import uuid + +import openai + +from newrelic.api.function_trace import FunctionTrace +from newrelic.api.time_trace import get_trace_linking_metadata +from newrelic.api.transaction import current_transaction +from newrelic.common.object_names import callable_name +from newrelic.common.object_wrapper import wrap_function_wrapper +from newrelic.common.package_version_utils import get_package_version +from newrelic.core.config import global_settings + +OPENAI_VERSION = get_package_version("openai") + + +def openai_error_attributes(exception, request_args): + api_key = getattr(openai, "api_key", None) + api_key_last_four_digits = f"sk-{api_key[-4:]}" if api_key else "" + number_of_messages = len(request_args.get("messages", [])) + + error_attributes = { + "api_key_last_four_digits": api_key_last_four_digits, + "request.model": request_args.get("model") or request_args.get("engine") or "", + "request.temperature": request_args.get("temperature", ""), + "request.max_tokens": request_args.get("max_tokens", ""), + "vendor": "openAI", + "ingest_source": "Python", + "response.organization": getattr(exception, "organization", ""), + "response.number_of_messages": number_of_messages, + "http.statusCode": getattr(exception, "http_status", ""), + "error.message": getattr(exception, "_message", ""), + "error.code": getattr(getattr(exception, "error", ""), "code", ""), + "error.param": getattr(exception, "param", ""), + } + return error_attributes + + +def wrap_embedding_create(wrapped, instance, args, kwargs): + transaction = current_transaction() + if not transaction: + return wrapped(*args, **kwargs) + + transaction.add_ml_model_info("OpenAI", OPENAI_VERSION) + + ft_name = callable_name(wrapped) + with FunctionTrace(ft_name) as ft: + try: + response = wrapped(*args, **kwargs) + except Exception as exc: + error_attributes = openai_error_attributes(exc, kwargs) + exc._nr_message = error_attributes.pop("error.message") + ft.notice_error( + attributes=error_attributes, + ) + 
raise + + if not response: + return response + + available_metadata = get_trace_linking_metadata() + span_id = available_metadata.get("span.id", "") + trace_id = available_metadata.get("trace.id", "") + embedding_id = str(uuid.uuid4()) + + response_headers = getattr(response, "_nr_response_headers", None) + request_id = response_headers.get("x-request-id", "") if response_headers else "" + response_model = response.get("model", "") + response_usage = response.get("usage", {}) + + settings = transaction.settings if transaction.settings is not None else global_settings() + + embedding_dict = { + "id": embedding_id, + "appName": settings.app_name, + "span_id": span_id, + "trace_id": trace_id, + "request_id": request_id, + "transaction_id": transaction._transaction_id, + "input": kwargs.get("input", ""), + "api_key_last_four_digits": f"sk-{response.api_key[-4:]}", + "duration": ft.duration, + "request.model": kwargs.get("model") or kwargs.get("engine") or "", + "response.model": response_model, + "response.organization": response.organization, + "response.api_type": response.api_type, + "response.usage.total_tokens": response_usage.get("total_tokens", "") if any(response_usage) else "", + "response.usage.prompt_tokens": response_usage.get("prompt_tokens", "") if any(response_usage) else "", + "response.headers.llmVersion": response_headers.get("openai-version", ""), + "response.headers.ratelimitLimitRequests": check_rate_limit_header( + response_headers, "x-ratelimit-limit-requests", True + ), + "response.headers.ratelimitLimitTokens": check_rate_limit_header( + response_headers, "x-ratelimit-limit-tokens", True + ), + "response.headers.ratelimitResetTokens": check_rate_limit_header( + response_headers, "x-ratelimit-reset-tokens", False + ), + "response.headers.ratelimitResetRequests": check_rate_limit_header( + response_headers, "x-ratelimit-reset-requests", False + ), + "response.headers.ratelimitRemainingTokens": check_rate_limit_header( + response_headers, 
"x-ratelimit-remaining-tokens", True + ), + "response.headers.ratelimitRemainingRequests": check_rate_limit_header( + response_headers, "x-ratelimit-remaining-requests", True + ), + "vendor": "openAI", + "ingest_source": "Python", + } + + transaction.record_ml_event("LlmEmbedding", embedding_dict) + return response + + +def wrap_chat_completion_create(wrapped, instance, args, kwargs): + transaction = current_transaction() + + if not transaction: + return wrapped(*args, **kwargs) + + transaction.add_ml_model_info("OpenAI", OPENAI_VERSION) + + ft_name = callable_name(wrapped) + with FunctionTrace(ft_name) as ft: + try: + response = wrapped(*args, **kwargs) + except Exception as exc: + error_attributes = openai_error_attributes(exc, kwargs) + exc._nr_message = error_attributes.pop("error.message") + ft.notice_error( + attributes=error_attributes, + ) + raise + + if not response: + return response + + custom_attrs_dict = transaction._custom_params + conversation_id = custom_attrs_dict.get("conversation_id", "") + + chat_completion_id = str(uuid.uuid4()) + available_metadata = get_trace_linking_metadata() + span_id = available_metadata.get("span.id", "") + trace_id = available_metadata.get("trace.id", "") + + response_headers = getattr(response, "_nr_response_headers", None) + response_model = response.get("model", "") + settings = transaction.settings if transaction.settings is not None else global_settings() + response_id = response.get("id") + request_id = response_headers.get("x-request-id", "") + + api_key = getattr(response, "api_key", None) + response_usage = response.get("usage", {}) + + messages = kwargs.get("messages", []) + choices = response.get("choices", []) + + chat_completion_summary_dict = { + "id": chat_completion_id, + "appName": settings.app_name, + "conversation_id": conversation_id, + "span_id": span_id, + "trace_id": trace_id, + "transaction_id": transaction._transaction_id, + "request_id": request_id, + "api_key_last_four_digits": 
f"sk-{api_key[-4:]}" if api_key else "", + "duration": ft.duration, + "request.model": kwargs.get("model") or kwargs.get("engine") or "", + "response.model": response_model, + "response.organization": getattr(response, "organization", ""), + "response.usage.completion_tokens": response_usage.get("completion_tokens", "") if any(response_usage) else "", + "response.usage.total_tokens": response_usage.get("total_tokens", "") if any(response_usage) else "", + "response.usage.prompt_tokens": response_usage.get("prompt_tokens", "") if any(response_usage) else "", + "request.temperature": kwargs.get("temperature", ""), + "request.max_tokens": kwargs.get("max_tokens", ""), + "response.choices.finish_reason": choices[0].finish_reason if choices else "", + "response.api_type": getattr(response, "api_type", ""), + "response.headers.llmVersion": response_headers.get("openai-version", ""), + "response.headers.ratelimitLimitRequests": check_rate_limit_header( + response_headers, "x-ratelimit-limit-requests", True + ), + "response.headers.ratelimitLimitTokens": check_rate_limit_header( + response_headers, "x-ratelimit-limit-tokens", True + ), + "response.headers.ratelimitResetTokens": check_rate_limit_header( + response_headers, "x-ratelimit-reset-tokens", False + ), + "response.headers.ratelimitResetRequests": check_rate_limit_header( + response_headers, "x-ratelimit-reset-requests", False + ), + "response.headers.ratelimitRemainingTokens": check_rate_limit_header( + response_headers, "x-ratelimit-remaining-tokens", True + ), + "response.headers.ratelimitRemainingRequests": check_rate_limit_header( + response_headers, "x-ratelimit-remaining-requests", True + ), + "vendor": "openAI", + "ingest_source": "Python", + "response.number_of_messages": len(messages) + len(choices), + } + + transaction.record_ml_event("LlmChatCompletionSummary", chat_completion_summary_dict) + message_list = list(messages) + if choices: + message_list.extend([choices[0].message]) + + message_ids = 
create_chat_completion_message_event( + transaction, + settings.app_name, + message_list, + chat_completion_id, + span_id, + trace_id, + response_model, + response_id, + request_id, + conversation_id, + ) + + # Cache message ids on transaction for retrieval after open ai call completion. + if not hasattr(transaction, "_nr_message_ids"): + transaction._nr_message_ids = {} + transaction._nr_message_ids[response_id] = message_ids + + return response + + +def check_rate_limit_header(response_headers, header_name, is_int): + if not response_headers: + return "" + + if header_name in response_headers: + header_value = response_headers.get(header_name) + if is_int: + try: + header_value = int(header_value) + except Exception: + pass + return header_value + else: + return "" + + +def create_chat_completion_message_event( + transaction, + app_name, + message_list, + chat_completion_id, + span_id, + trace_id, + response_model, + response_id, + request_id, + conversation_id, +): + message_ids = [] + for index, message in enumerate(message_list): + message_id = "%s-%s" % (response_id, index) + message_ids.append(message_id) + chat_completion_message_dict = { + "id": message_id, + "appName": app_name, + "conversation_id": conversation_id, + "request_id": request_id, + "span_id": span_id, + "trace_id": trace_id, + "transaction_id": transaction._transaction_id, + "content": message.get("content", ""), + "role": message.get("role", ""), + "completion_id": chat_completion_id, + "sequence": index, + "response.model": response_model, + "vendor": "openAI", + "ingest_source": "Python", + } + transaction.record_ml_event("LlmChatCompletionMessage", chat_completion_message_dict) + return (conversation_id, request_id, message_ids) + + +async def wrap_embedding_acreate(wrapped, instance, args, kwargs): + transaction = current_transaction() + if not transaction: + return await wrapped(*args, **kwargs) + + transaction.add_ml_model_info("OpenAI", OPENAI_VERSION) + + ft_name = 
callable_name(wrapped) + with FunctionTrace(ft_name) as ft: + try: + response = await wrapped(*args, **kwargs) + except Exception as exc: + error_attributes = openai_error_attributes(exc, kwargs) + exc._nr_message = error_attributes.pop("error.message") + ft.notice_error( + attributes=error_attributes, + ) + raise + + if not response: + return response + + embedding_id = str(uuid.uuid4()) + response_headers = getattr(response, "_nr_response_headers", None) + + settings = transaction.settings if transaction.settings is not None else global_settings() + available_metadata = get_trace_linking_metadata() + span_id = available_metadata.get("span.id", "") + trace_id = available_metadata.get("trace.id", "") + + api_key = getattr(response, "api_key", None) + usage = response.get("usage") + total_tokens = "" + prompt_tokens = "" + if usage: + total_tokens = usage.get("total_tokens", "") + prompt_tokens = usage.get("prompt_tokens", "") + + embedding_dict = { + "id": embedding_id, + "duration": ft.duration, + "api_key_last_four_digits": f"sk-{api_key[-4:]}" if api_key else "", + "request_id": response_headers.get("x-request-id", ""), + "input": kwargs.get("input", ""), + "response.api_type": getattr(response, "api_type", ""), + "response.organization": getattr(response, "organization", ""), + "request.model": kwargs.get("model") or kwargs.get("engine") or "", + "response.model": response.get("model", ""), + "appName": settings.app_name, + "trace_id": trace_id, + "transaction_id": transaction._transaction_id, + "span_id": span_id, + "response.usage.total_tokens": total_tokens, + "response.usage.prompt_tokens": prompt_tokens, + "response.headers.llmVersion": response_headers.get("openai-version", ""), + "response.headers.ratelimitLimitRequests": check_rate_limit_header( + response_headers, "x-ratelimit-limit-requests", True + ), + "response.headers.ratelimitLimitTokens": check_rate_limit_header( + response_headers, "x-ratelimit-limit-tokens", True + ), + 
"response.headers.ratelimitResetTokens": check_rate_limit_header( + response_headers, "x-ratelimit-reset-tokens", False + ), + "response.headers.ratelimitResetRequests": check_rate_limit_header( + response_headers, "x-ratelimit-reset-requests", False + ), + "response.headers.ratelimitRemainingTokens": check_rate_limit_header( + response_headers, "x-ratelimit-remaining-tokens", True + ), + "response.headers.ratelimitRemainingRequests": check_rate_limit_header( + response_headers, "x-ratelimit-remaining-requests", True + ), + "vendor": "openAI", + "ingest_source": "Python", + } + + transaction.record_ml_event("LlmEmbedding", embedding_dict) + return response + + +async def wrap_chat_completion_acreate(wrapped, instance, args, kwargs): + transaction = current_transaction() + + if not transaction: + return await wrapped(*args, **kwargs) + + transaction.add_ml_model_info("OpenAI", OPENAI_VERSION) + + ft_name = callable_name(wrapped) + with FunctionTrace(ft_name) as ft: + try: + response = await wrapped(*args, **kwargs) + except Exception as exc: + error_attributes = openai_error_attributes(exc, kwargs) + exc._nr_message = error_attributes.pop("error.message") + ft.notice_error( + attributes=error_attributes, + ) + raise + + if not response: + return response + + conversation_id = transaction._custom_params.get("conversation_id", "") + + chat_completion_id = str(uuid.uuid4()) + available_metadata = get_trace_linking_metadata() + span_id = available_metadata.get("span.id", "") + trace_id = available_metadata.get("trace.id", "") + + response_headers = getattr(response, "_nr_response_headers", None) + response_model = response.get("model", "") + settings = transaction.settings if transaction.settings is not None else global_settings() + response_id = response.get("id") + request_id = response_headers.get("x-request-id", "") + + api_key = getattr(response, "api_key", None) + usage = response.get("usage") + total_tokens = "" + prompt_tokens = "" + completion_tokens = "" + if 
usage: + total_tokens = usage.get("total_tokens", "") + prompt_tokens = usage.get("prompt_tokens", "") + completion_tokens = usage.get("completion_tokens", "") + + messages = kwargs.get("messages", []) + choices = response.get("choices", []) + + chat_completion_summary_dict = { + "id": chat_completion_id, + "appName": settings.app_name, + "conversation_id": conversation_id, + "request_id": request_id, + "span_id": span_id, + "trace_id": trace_id, + "transaction_id": transaction._transaction_id, + "api_key_last_four_digits": f"sk-{api_key[-4:]}" if api_key else "", + "duration": ft.duration, + "request.model": kwargs.get("model") or kwargs.get("engine") or "", + "response.model": response_model, + "response.organization": getattr(response, "organization", ""), + "response.usage.completion_tokens": completion_tokens, + "response.usage.total_tokens": total_tokens, + "response.usage.prompt_tokens": prompt_tokens, + "response.number_of_messages": len(messages) + len(choices), + "request.temperature": kwargs.get("temperature", ""), + "request.max_tokens": kwargs.get("max_tokens", ""), + "response.choices.finish_reason": choices[0].get("finish_reason", "") if choices else "", + "response.api_type": getattr(response, "api_type", ""), + "response.headers.llmVersion": response_headers.get("openai-version", ""), + "response.headers.ratelimitLimitRequests": check_rate_limit_header( + response_headers, "x-ratelimit-limit-requests", True + ), + "response.headers.ratelimitLimitTokens": check_rate_limit_header( + response_headers, "x-ratelimit-limit-tokens", True + ), + "response.headers.ratelimitResetTokens": check_rate_limit_header( + response_headers, "x-ratelimit-reset-tokens", False + ), + "response.headers.ratelimitResetRequests": check_rate_limit_header( + response_headers, "x-ratelimit-reset-requests", False + ), + "response.headers.ratelimitRemainingTokens": check_rate_limit_header( + response_headers, "x-ratelimit-remaining-tokens", True + ), + 
"response.headers.ratelimitRemainingRequests": check_rate_limit_header( + response_headers, "x-ratelimit-remaining-requests", True + ), + "vendor": "openAI", + "ingest_source": "Python", + } + + transaction.record_ml_event("LlmChatCompletionSummary", chat_completion_summary_dict) + message_list = list(messages) + if choices: + message_list.extend([choices[0].message]) + + message_ids = create_chat_completion_message_event( + transaction, + settings.app_name, + message_list, + chat_completion_id, + span_id, + trace_id, + response_model, + response_id, + request_id, + conversation_id, + ) + + # Cache message ids on transaction for retrieval after open ai call completion. + if not hasattr(transaction, "_nr_message_ids"): + transaction._nr_message_ids = {} + transaction._nr_message_ids[response_id] = message_ids + + return response + + +def wrap_convert_to_openai_object(wrapped, instance, args, kwargs): + resp = args[0] + returned_response = wrapped(*args, **kwargs) + + if isinstance(resp, openai.openai_response.OpenAIResponse): + setattr(returned_response, "_nr_response_headers", getattr(resp, "_headers", {})) + + return returned_response + + +def instrument_openai_util(module): + wrap_function_wrapper(module, "convert_to_openai_object", wrap_convert_to_openai_object) + + +def instrument_openai_api_resources_embedding(module): + if hasattr(module.Embedding, "create"): + wrap_function_wrapper(module, "Embedding.create", wrap_embedding_create) + if hasattr(module.Embedding, "acreate"): + wrap_function_wrapper(module, "Embedding.acreate", wrap_embedding_acreate) + + +def instrument_openai_api_resources_chat_completion(module): + if hasattr(module.ChatCompletion, "create"): + wrap_function_wrapper(module, "ChatCompletion.create", wrap_chat_completion_create) + if hasattr(module.ChatCompletion, "acreate"): + wrap_function_wrapper(module, "ChatCompletion.acreate", wrap_chat_completion_acreate) diff --git a/tests/agent_features/conftest.py b/tests/agent_features/conftest.py 
index bd6aa6c2ab..b8c8972d34 100644 --- a/tests/agent_features/conftest.py +++ b/tests/agent_features/conftest.py @@ -49,6 +49,7 @@ "test_asgi_browser.py", "test_asgi_distributed_tracing.py", "test_asgi_w3c_trace_context.py", + "test_ml_events.py", ] else: from testing_support.fixture.event_loop import event_loop diff --git a/tests/agent_features/test_exception_messages.py b/tests/agent_features/test_exception_messages.py index e9944f9205..55ff30cac9 100644 --- a/tests/agent_features/test_exception_messages.py +++ b/tests/agent_features/test_exception_messages.py @@ -13,29 +13,38 @@ # See the License for the specific language governing permissions and # limitations under the License. -import six import pytest +import six +from testing_support.fixtures import ( + reset_core_stats_engine, + set_default_encoding, + validate_application_exception_message, + validate_transaction_exception_message, +) from newrelic.api.application import application_instance as application from newrelic.api.background_task import background_task from newrelic.api.time_trace import notice_error -from testing_support.fixtures import (validate_transaction_exception_message, - set_default_encoding, validate_application_exception_message, - reset_core_stats_engine) - +# Turn off black formatting for this section of the code. +# While Python 2 has been EOL'd since 2020, New Relic still +# supports it and therefore these messages need to keep this +# specific formatting. 
+# fmt: off UNICODE_MESSAGE = u'I💜🐍' UNICODE_ENGLISH = u'I love python' BYTES_ENGLISH = b'I love python' BYTES_UTF8_ENCODED = b'I\xf0\x9f\x92\x9c\xf0\x9f\x90\x8d' INCORRECTLY_DECODED_BYTES_PY2 = u'I\u00f0\u009f\u0092\u009c\u00f0\u009f\u0090\u008d' INCORRECTLY_DECODED_BYTES_PY3 = u"b'I\\xf0\\x9f\\x92\\x9c\\xf0\\x9f\\x90\\x8d'" +# fmt: on # =================== Exception messages during transaction ==================== # ---------------- Python 2 + @pytest.mark.skipif(six.PY3, reason="Testing Python 2 string behavior") -@set_default_encoding('ascii') +@set_default_encoding("ascii") @validate_transaction_exception_message(UNICODE_MESSAGE) @background_task() def test_py2_transaction_exception_message_unicode(): @@ -46,8 +55,9 @@ def test_py2_transaction_exception_message_unicode(): except ValueError: notice_error() + @pytest.mark.skipif(six.PY3, reason="Testing Python 2 string behavior") -@set_default_encoding('ascii') +@set_default_encoding("ascii") @validate_transaction_exception_message(UNICODE_ENGLISH) @background_task() def test_py2_transaction_exception_message_unicode_english(): @@ -58,8 +68,9 @@ def test_py2_transaction_exception_message_unicode_english(): except ValueError: notice_error() + @pytest.mark.skipif(six.PY3, reason="Testing Python 2 string behavior") -@set_default_encoding('ascii') +@set_default_encoding("ascii") @validate_transaction_exception_message(UNICODE_ENGLISH) @background_task() def test_py2_transaction_exception_message_bytes_english(): @@ -69,8 +80,9 @@ def test_py2_transaction_exception_message_bytes_english(): except ValueError: notice_error() + @pytest.mark.skipif(six.PY3, reason="Testing Python 2 string behavior") -@set_default_encoding('ascii') +@set_default_encoding("ascii") @validate_transaction_exception_message(INCORRECTLY_DECODED_BYTES_PY2) @background_task() def test_py2_transaction_exception_message_bytes_non_english(): @@ -83,8 +95,9 @@ def test_py2_transaction_exception_message_bytes_non_english(): except ValueError: 
notice_error() + @pytest.mark.skipif(six.PY3, reason="Testing Python 2 string behavior") -@set_default_encoding('ascii') +@set_default_encoding("ascii") @validate_transaction_exception_message(INCORRECTLY_DECODED_BYTES_PY2) @background_task() def test_py2_transaction_exception_message_bytes_implicit_encoding_non_english(): @@ -93,16 +106,16 @@ def test_py2_transaction_exception_message_bytes_implicit_encoding_non_english() MESSAGE IS WRONG. We do not expect it to work now, or in the future. """ try: - # Bytes literal with non-ascii compatible characters only allowed in # python 2 - raise ValueError('I💜🐍') + raise ValueError("I💜🐍") except ValueError: notice_error() + @pytest.mark.skipif(six.PY3, reason="Testing Python 2 string behavior") -@set_default_encoding('utf-8') +@set_default_encoding("utf-8") @validate_transaction_exception_message(UNICODE_MESSAGE) @background_task() def test_py2_transaction_exception_message_unicode_utf8_encoding(): @@ -114,8 +127,9 @@ def test_py2_transaction_exception_message_unicode_utf8_encoding(): except ValueError: notice_error() + @pytest.mark.skipif(six.PY3, reason="Testing Python 2 string behavior") -@set_default_encoding('utf-8') +@set_default_encoding("utf-8") @validate_transaction_exception_message(UNICODE_MESSAGE) @background_task() def test_py2_transaction_exception_message_bytes_utf8_encoding_non_english(): @@ -123,16 +137,17 @@ def test_py2_transaction_exception_message_bytes_utf8_encoding_non_english(): encoding is also utf-8. 
""" try: - # Bytes literal with non-ascii compatible characters only allowed in # python 2 - raise ValueError('I💜🐍') + raise ValueError("I💜🐍") except ValueError: notice_error() + # ---------------- Python 3 + @pytest.mark.skipif(six.PY2, reason="Testing Python 3 string behavior") @validate_transaction_exception_message(UNICODE_MESSAGE) @background_task() @@ -144,6 +159,7 @@ def test_py3_transaction_exception_message_bytes_non_english_unicode(): except ValueError: notice_error() + @pytest.mark.skipif(six.PY2, reason="Testing Python 3 string behavior") @validate_transaction_exception_message(UNICODE_ENGLISH) @background_task() @@ -155,6 +171,7 @@ def test_py3_transaction_exception_message_unicode_english(): except ValueError: notice_error() + @pytest.mark.skipif(six.PY2, reason="Testing Python 3 string behavior") @validate_transaction_exception_message(INCORRECTLY_DECODED_BYTES_PY3) @background_task() @@ -171,13 +188,15 @@ def test_py3_transaction_exception_message_bytes_non_english(): except ValueError: notice_error() + # =================== Exception messages outside transaction ==================== # ---------------- Python 2 + @pytest.mark.skipif(six.PY3, reason="Testing Python 2 string behavior") @reset_core_stats_engine() -@set_default_encoding('ascii') +@set_default_encoding("ascii") @validate_application_exception_message(UNICODE_MESSAGE) def test_py2_application_exception_message_unicode(): """Assert unicode message when using non-ascii characters is preserved, @@ -188,9 +207,10 @@ def test_py2_application_exception_message_unicode(): app = application() notice_error(application=app) + @pytest.mark.skipif(six.PY3, reason="Testing Python 2 string behavior") @reset_core_stats_engine() -@set_default_encoding('ascii') +@set_default_encoding("ascii") @validate_application_exception_message(UNICODE_ENGLISH) def test_py2_application_exception_message_unicode_english(): """Assert unicode message when using ascii compatible characters preserved, @@ -201,9 +221,10 @@ 
def test_py2_application_exception_message_unicode_english(): app = application() notice_error(application=app) + @pytest.mark.skipif(six.PY3, reason="Testing Python 2 string behavior") @reset_core_stats_engine() -@set_default_encoding('ascii') +@set_default_encoding("ascii") @validate_application_exception_message(UNICODE_ENGLISH) def test_py2_application_exception_message_bytes_english(): """Assert byte string of ascii characters decodes sensibly""" @@ -213,9 +234,10 @@ def test_py2_application_exception_message_bytes_english(): app = application() notice_error(application=app) + @pytest.mark.skipif(six.PY3, reason="Testing Python 2 string behavior") @reset_core_stats_engine() -@set_default_encoding('ascii') +@set_default_encoding("ascii") @validate_application_exception_message(INCORRECTLY_DECODED_BYTES_PY2) def test_py2_application_exception_message_bytes_non_english(): """Assert known situation where (explicitly) utf-8 encoded byte string gets @@ -228,9 +250,10 @@ def test_py2_application_exception_message_bytes_non_english(): app = application() notice_error(application=app) + @pytest.mark.skipif(six.PY3, reason="Testing Python 2 string behavior") @reset_core_stats_engine() -@set_default_encoding('ascii') +@set_default_encoding("ascii") @validate_application_exception_message(INCORRECTLY_DECODED_BYTES_PY2) def test_py2_application_exception_message_bytes_implicit_encoding_non_english(): """Assert known situation where (implicitly) utf-8 encoded byte string gets @@ -238,18 +261,18 @@ def test_py2_application_exception_message_bytes_implicit_encoding_non_english() MESSAGE IS WRONG. We do not expect it to work now, or in the future. 
""" try: - # Bytes literal with non-ascii compatible characters only allowed in # python 2 - raise ValueError('I💜🐍') + raise ValueError("I💜🐍") except ValueError: app = application() notice_error(application=app) + @pytest.mark.skipif(six.PY3, reason="Testing Python 2 string behavior") @reset_core_stats_engine() -@set_default_encoding('utf-8') +@set_default_encoding("utf-8") @validate_application_exception_message(UNICODE_MESSAGE) def test_py2_application_exception_message_unicode_utf8_encoding(): """Assert unicode error message is preserved with sys non-default utf-8 @@ -261,26 +284,28 @@ def test_py2_application_exception_message_unicode_utf8_encoding(): app = application() notice_error(application=app) + @pytest.mark.skipif(six.PY3, reason="Testing Python 2 string behavior") @reset_core_stats_engine() -@set_default_encoding('utf-8') +@set_default_encoding("utf-8") @validate_application_exception_message(UNICODE_MESSAGE) def test_py2_application_exception_message_bytes_utf8_encoding_non_english(): """Assert utf-8 encoded byte produces correct exception message when sys encoding is also utf-8. 
""" try: - # Bytes literal with non-ascii compatible characters only allowed in # python 2 - raise ValueError('I💜🐍') + raise ValueError("I💜🐍") except ValueError: app = application() notice_error(application=app) + # ---------------- Python 3 + @pytest.mark.skipif(six.PY2, reason="Testing Python 3 string behavior") @reset_core_stats_engine() @validate_application_exception_message(UNICODE_MESSAGE) @@ -293,6 +318,7 @@ def test_py3_application_exception_message_bytes_non_english_unicode(): app = application() notice_error(application=app) + @pytest.mark.skipif(six.PY2, reason="Testing Python 3 string behavior") @reset_core_stats_engine() @validate_application_exception_message(UNICODE_ENGLISH) @@ -305,6 +331,7 @@ def test_py3_application_exception_message_unicode_english(): app = application() notice_error(application=app) + @pytest.mark.skipif(six.PY2, reason="Testing Python 3 string behavior") @reset_core_stats_engine() @validate_application_exception_message(INCORRECTLY_DECODED_BYTES_PY3) @@ -321,3 +348,15 @@ def test_py3_application_exception_message_bytes_non_english(): except ValueError: app = application() notice_error(application=app) + + +@reset_core_stats_engine() +@validate_application_exception_message("My custom message") +def test_nr_message_exception_attr_override(): + """Override the message using the _nr_message attribute.""" + try: + raise ValueError("Original error message") + except ValueError as e: + e._nr_message = "My custom message" + app = application() + notice_error(application=app) diff --git a/tests/agent_features/test_ml_events.py b/tests/agent_features/test_ml_events.py index 5720224bbe..b2a77624fe 100644 --- a/tests/agent_features/test_ml_events.py +++ b/tests/agent_features/test_ml_events.py @@ -58,23 +58,94 @@ def core_app(collector_agent_registration): @validate_ml_event_payload( - [{"foo": "bar", "real_agent_id": "1234567", "event.domain": "newrelic.ml_events", "event.name": "InferenceEvent"}] + { + "apm": [ + { + "foo": "bar", + 
"real_agent_id": "1234567", + "event.domain": "newrelic.ml_events", + "event.name": "MyCustomEvent", + } + ] + } ) @reset_core_stats_engine() -def test_ml_event_payload_inside_transaction(core_app): +def test_ml_event_payload_noninference_event_inside_transaction(core_app): + @background_task(name="test_ml_event_payload_inside_transaction") + def _test(): + record_ml_event("MyCustomEvent", {"foo": "bar"}) + + _test() + core_app.harvest() + + +@validate_ml_event_payload( + { + "inference": [ + { + "foo": "bar", + "real_agent_id": "1234567", + "event.domain": "newrelic.ml_events", + "event.name": "InferenceEvent", + } + ] + } +) +@reset_core_stats_engine() +def test_ml_event_payload_inference_event_inside_transaction(core_app): + @background_task(name="test_ml_event_payload_inside_transaction") + def _test(): + record_ml_event("InferenceEvent", {"foo": "bar"}) + + _test() + core_app.harvest() + + +@validate_ml_event_payload( + { + "apm": [ + { + "foo": "bar", + "real_agent_id": "1234567", + "event.domain": "newrelic.ml_events", + "event.name": "MyCustomEvent", + } + ], + "inference": [ + { + "foo": "bar", + "real_agent_id": "1234567", + "event.domain": "newrelic.ml_events", + "event.name": "InferenceEvent", + } + ], + } +) +@reset_core_stats_engine() +def test_ml_event_payload_both_events_inside_transaction(core_app): @background_task(name="test_ml_event_payload_inside_transaction") def _test(): record_ml_event("InferenceEvent", {"foo": "bar"}) + record_ml_event("MyCustomEvent", {"foo": "bar"}) _test() core_app.harvest() @validate_ml_event_payload( - [{"foo": "bar", "real_agent_id": "1234567", "event.domain": "newrelic.ml_events", "event.name": "InferenceEvent"}] + { + "inference": [ + { + "foo": "bar", + "real_agent_id": "1234567", + "event.domain": "newrelic.ml_events", + "event.name": "InferenceEvent", + } + ] + } ) @reset_core_stats_engine() -def test_ml_event_payload_outside_transaction(core_app): +def 
test_ml_event_payload_inference_event_outside_transaction(core_app): def _test(): app = application() record_ml_event("InferenceEvent", {"foo": "bar"}, application=app) @@ -83,6 +154,59 @@ def _test(): core_app.harvest() +@validate_ml_event_payload( + { + "apm": [ + { + "foo": "bar", + "real_agent_id": "1234567", + "event.domain": "newrelic.ml_events", + "event.name": "MyCustomEvent", + } + ], + "inference": [ + { + "foo": "bar", + "real_agent_id": "1234567", + "event.domain": "newrelic.ml_events", + "event.name": "InferenceEvent", + } + ], + } +) +@reset_core_stats_engine() +def test_ml_event_payload_both_events_outside_transaction(core_app): + def _test(): + app = application() + record_ml_event("InferenceEvent", {"foo": "bar"}, application=app) + record_ml_event("MyCustomEvent", {"foo": "bar"}, application=app) + + _test() + core_app.harvest() + + +@validate_ml_event_payload( + { + "apm": [ + { + "foo": "bar", + "real_agent_id": "1234567", + "event.domain": "newrelic.ml_events", + "event.name": "MyCustomEvent", + } + ] + } +) +@reset_core_stats_engine() +def test_ml_event_payload_noninference_event_outside_transaction(core_app): + def _test(): + app = application() + record_ml_event("MyCustomEvent", {"foo": "bar"}, application=app) + + _test() + core_app.harvest() + + @pytest.mark.parametrize( "params,expected", [ @@ -102,6 +226,62 @@ def _test(): _test() +@reset_core_stats_engine() +def test_record_ml_event_truncation_inside_transaction(): + @validate_ml_events([(_intrinsics, {"a": "a" * 4095})]) + @background_task() + def _test(): + record_ml_event("LabelEvent", {"a": "a" * 4100}) + + _test() + + +@reset_core_stats_engine() +def test_record_ml_event_truncation_outside_transaction(): + @validate_ml_events_outside_transaction([(_intrinsics, {"a": "a" * 4095})]) + def _test(): + app = application() + record_ml_event("LabelEvent", {"a": "a" * 4100}, application=app) + + _test() + + +@reset_core_stats_engine() +def test_record_ml_event_max_num_attrs(): + 
too_many_attrs_event = {} + for i in range(65): + too_many_attrs_event[str(i)] = str(i) + + max_attrs_event = {} + for i in range(64): + max_attrs_event[str(i)] = str(i) + + @validate_ml_events([(_intrinsics, max_attrs_event)]) + @background_task() + def _test(): + record_ml_event("LabelEvent", too_many_attrs_event) + + _test() + + +@reset_core_stats_engine() +def test_record_ml_event_max_num_attrs_outside_transaction(): + too_many_attrs_event = {} + for i in range(65): + too_many_attrs_event[str(i)] = str(i) + + max_attrs_event = {} + for i in range(64): + max_attrs_event[str(i)] = str(i) + + @validate_ml_events_outside_transaction([(_intrinsics, max_attrs_event)]) + def _test(): + app = application() + record_ml_event("LabelEvent", too_many_attrs_event, application=app) + + _test() + + @pytest.mark.parametrize( "params,expected", [ @@ -151,6 +331,7 @@ def test_record_ml_event_outside_transaction_params_not_a_dict(): # Tests for ML Events configuration settings + @override_application_settings({"ml_insights_events.enabled": False}) @reset_core_stats_engine() @validate_ml_event_count(count=0) diff --git a/tests/agent_features/test_record_llm_feedback_event.py b/tests/agent_features/test_record_llm_feedback_event.py new file mode 100644 index 0000000000..59921ff400 --- /dev/null +++ b/tests/agent_features/test_record_llm_feedback_event.py @@ -0,0 +1,95 @@ +# Copyright 2010 New Relic, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +from testing_support.fixtures import reset_core_stats_engine +from testing_support.validators.validate_ml_event_count import validate_ml_event_count +from testing_support.validators.validate_ml_events import validate_ml_events + +from newrelic.api.background_task import background_task +from newrelic.api.ml_model import record_llm_feedback_event + + +@reset_core_stats_engine() +def test_record_llm_feedback_event_all_args_supplied(): + llm_feedback_all_args_recorded_events = [ + ( + {"type": "LlmFeedbackMessage"}, + { + "id": None, + "category": "informative", + "rating": 1, + "message_id": "message_id", + "request_id": "request_id", + "conversation_id": "conversation_id", + "ingest_source": "Python", + "message": "message", + "foo": "bar", + }, + ), + ] + + @validate_ml_events(llm_feedback_all_args_recorded_events) + @background_task() + def _test(): + record_llm_feedback_event( + rating=1, + message_id="message_id", + category="informative", + request_id="request_id", + conversation_id="conversation_id", + message="message", + metadata={"foo": "bar"}, + ) + + _test() + + +@reset_core_stats_engine() +def test_record_llm_feedback_event_required_args_supplied(): + llm_feedback_required_args_recorded_events = [ + ( + {"type": "LlmFeedbackMessage"}, + { + "id": None, + "category": "", + "rating": "Good", + "message_id": "message_id", + "request_id": "", + "conversation_id": "", + "ingest_source": "Python", + "message": "", + }, + ), + ] + + @validate_ml_events(llm_feedback_required_args_recorded_events) + @background_task() + def _test(): + record_llm_feedback_event(message_id="message_id", rating="Good") + + _test() + + +@reset_core_stats_engine() +@validate_ml_event_count(count=0) +def test_record_llm_feedback_event_outside_txn(): + record_llm_feedback_event( + rating="Good", + message_id="message_id", + category="informative", + request_id="request_id", + conversation_id="conversation_id", + message="message", + metadata={"foo": "bar"}, + ) diff --git 
a/tests/external_boto3/conftest.py b/tests/external_boto3/conftest.py deleted file mode 100644 index 90d82f0072..0000000000 --- a/tests/external_boto3/conftest.py +++ /dev/null @@ -1,30 +0,0 @@ -# Copyright 2010 New Relic, Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import pytest - -from testing_support.fixtures import collector_agent_registration_fixture, collector_available_fixture # noqa: F401; pylint: disable=W0611 - - -_default_settings = { - 'transaction_tracer.explain_threshold': 0.0, - 'transaction_tracer.transaction_threshold': 0.0, - 'transaction_tracer.stack_trace_threshold': 0.0, - 'debug.log_data_collector_payloads': True, - 'debug.record_transaction_failure': True, -} - -collector_agent_registration = collector_agent_registration_fixture( - app_name='Python Agent Test (external_boto3)', - default_settings=_default_settings) diff --git a/tests/external_botocore/_mock_external_bedrock_server.py b/tests/external_botocore/_mock_external_bedrock_server.py new file mode 100644 index 0000000000..da5ff68dd9 --- /dev/null +++ b/tests/external_botocore/_mock_external_bedrock_server.py @@ -0,0 +1,3461 @@ +# Copyright 2010 New Relic, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import json +import re + +from testing_support.mock_external_http_server import MockExternalHTTPServer + +# This defines an external server that test apps can make requests to instead +# of the real Bedrock backend. This provides three benefits: +# +# 1) It removes dependencies on external websites. +# 2) It provides a better mechanism for making an external call in a test app +# than simply calling another endpoint the test app makes available, because +# this server is not instrumented, so we don't have to sort through +# transactions to separate the ones created in the test app from the ones +# created by an external call. +# 3) It runs on a separate thread, so it won't block the test app.
+ +RESPONSES = { + "ai21.j2-mid-v1::What is 212 degrees Fahrenheit converted to Celsius?": [ + {"Content-Type": "application/json", "x-amzn-RequestId": "c863d9fc-888b-421c-a175-ac5256baec62"}, + 200, + { + "id": 1234, + "prompt": { + "text": "What is 212 degrees Fahrenheit converted to Celsius?", + "tokens": [ + { + "generatedToken": { + "token": "▁What▁is", + "logprob": -7.446773529052734, + "raw_logprob": -7.446773529052734, + }, + "topTokens": None, + "textRange": {"start": 0, "end": 7}, + }, + { + "generatedToken": { + "token": "▁", + "logprob": -3.8046724796295166, + "raw_logprob": -3.8046724796295166, + }, + "topTokens": None, + "textRange": {"start": 7, "end": 8}, + }, + { + "generatedToken": { + "token": "212", + "logprob": -9.287349700927734, + "raw_logprob": -9.287349700927734, + }, + "topTokens": None, + "textRange": {"start": 8, "end": 11}, + }, + { + "generatedToken": { + "token": "▁degrees▁Fahrenheit", + "logprob": -7.953181743621826, + "raw_logprob": -7.953181743621826, + }, + "topTokens": None, + "textRange": {"start": 11, "end": 30}, + }, + { + "generatedToken": { + "token": "▁converted▁to", + "logprob": -6.168096542358398, + "raw_logprob": -6.168096542358398, + }, + "topTokens": None, + "textRange": {"start": 30, "end": 43}, + }, + { + "generatedToken": { + "token": "▁Celsius", + "logprob": -0.09790332615375519, + "raw_logprob": -0.09790332615375519, + }, + "topTokens": None, + "textRange": {"start": 43, "end": 51}, + }, + { + "generatedToken": { + "token": "?", + "logprob": -6.5795369148254395, + "raw_logprob": -6.5795369148254395, + }, + "topTokens": None, + "textRange": {"start": 51, "end": 52}, + }, + ], + }, + "completions": [ + { + "data": { + "text": "\n212 degrees Fahrenheit is equal to 100 degrees Celsius.", + "tokens": [ + { + "generatedToken": { + "token": "<|newline|>", + "logprob": -1.6689286894688848e-06, + "raw_logprob": -0.00015984688070602715, + }, + "topTokens": None, + "textRange": {"start": 0, "end": 1}, + }, + { + 
"generatedToken": { + "token": "▁", + "logprob": -0.03473362699151039, + "raw_logprob": -0.11261807382106781, + }, + "topTokens": None, + "textRange": {"start": 1, "end": 1}, + }, + { + "generatedToken": { + "token": "212", + "logprob": -0.003316262038424611, + "raw_logprob": -0.019686665385961533, + }, + "topTokens": None, + "textRange": {"start": 1, "end": 4}, + }, + { + "generatedToken": { + "token": "▁degrees▁Fahrenheit", + "logprob": -0.003579758107662201, + "raw_logprob": -0.03144374489784241, + }, + "topTokens": None, + "textRange": {"start": 4, "end": 23}, + }, + { + "generatedToken": { + "token": "▁is▁equal▁to", + "logprob": -0.0027733694296330214, + "raw_logprob": -0.027207009494304657, + }, + "topTokens": None, + "textRange": {"start": 23, "end": 35}, + }, + { + "generatedToken": { + "token": "▁", + "logprob": -0.0003392120997887105, + "raw_logprob": -0.005458095110952854, + }, + "topTokens": None, + "textRange": {"start": 35, "end": 36}, + }, + { + "generatedToken": { + "token": "100", + "logprob": -2.145764938177308e-06, + "raw_logprob": -0.00012730741582345217, + }, + "topTokens": None, + "textRange": {"start": 36, "end": 39}, + }, + { + "generatedToken": { + "token": "▁degrees▁Celsius", + "logprob": -0.31207239627838135, + "raw_logprob": -0.402545303106308, + }, + "topTokens": None, + "textRange": {"start": 39, "end": 55}, + }, + { + "generatedToken": { + "token": ".", + "logprob": -0.023684674873948097, + "raw_logprob": -0.0769972875714302, + }, + "topTokens": None, + "textRange": {"start": 55, "end": 56}, + }, + { + "generatedToken": { + "token": "<|endoftext|>", + "logprob": -0.0073706600815057755, + "raw_logprob": -0.06265579164028168, + }, + "topTokens": None, + "textRange": {"start": 56, "end": 56}, + }, + ], + }, + "finishReason": {"reason": "endoftext"}, + } + ], + }, + ], + "amazon.titan-embed-g1-text-02::This is an embedding test.": [ + {"Content-Type": "application/json", "x-amzn-RequestId": "b10ac895-eae3-4f07-b926-10b2866c55ed"}, + 200, 
+ { + "embedding": [ + -0.14160156, + 0.034423828, + 0.54296875, + 0.10986328, + 0.053466797, + 0.3515625, + 0.12988281, + -0.0002708435, + -0.21484375, + 0.060302734, + 0.58984375, + -0.5859375, + 0.52734375, + 0.82421875, + -0.91015625, + -0.19628906, + 0.45703125, + 0.609375, + -0.67578125, + 0.39453125, + -0.46875, + -0.25390625, + -0.21191406, + 0.114746094, + 0.31640625, + -0.41015625, + -0.32617188, + -0.43554688, + 0.4765625, + -0.4921875, + 0.40429688, + 0.06542969, + 0.859375, + -0.008056641, + -0.19921875, + 0.072753906, + 0.33203125, + 0.69921875, + 0.39453125, + 0.15527344, + 0.08886719, + -0.25, + 0.859375, + 0.22949219, + -0.19042969, + 0.13769531, + -0.078125, + 0.41210938, + 0.875, + 0.5234375, + 0.59765625, + -0.22949219, + -0.22558594, + -0.47460938, + 0.37695312, + 0.51953125, + -0.5703125, + 0.46679688, + 0.43554688, + 0.17480469, + -0.080566406, + -0.16699219, + -0.734375, + -1.0625, + -0.33984375, + 0.390625, + -0.18847656, + -0.5234375, + -0.48828125, + 0.44921875, + -0.09814453, + -0.3359375, + 0.087402344, + 0.36914062, + 1.3203125, + 0.25585938, + 0.14746094, + -0.059570312, + -0.15820312, + -0.037353516, + -0.61328125, + -0.6484375, + -0.35351562, + 0.55078125, + -0.26953125, + 0.90234375, + 0.3671875, + 0.31054688, + 0.00014019012, + -0.171875, + 0.025512695, + 0.5078125, + 0.11621094, + 0.33203125, + 0.8125, + -0.3046875, + -1.078125, + -0.5703125, + 0.26171875, + -0.4609375, + 0.203125, + 0.44726562, + -0.5078125, + 0.41601562, + -0.1953125, + 0.028930664, + -0.57421875, + 0.2265625, + 0.13574219, + -0.040039062, + -0.22949219, + -0.515625, + -0.19042969, + -0.30078125, + 0.10058594, + -0.66796875, + 0.6015625, + 0.296875, + -0.765625, + -0.87109375, + 0.2265625, + 0.068847656, + -0.088378906, + -0.1328125, + -0.796875, + -0.37304688, + 0.47460938, + -0.3515625, + -0.8125, + -0.32226562, + 0.265625, + 0.3203125, + -0.4140625, + -0.49023438, + 0.859375, + -0.19140625, + -0.6328125, + 0.10546875, + -0.5625, + 0.66015625, + 0.26171875, + 
-0.2109375, + 0.421875, + -0.82421875, + 0.29296875, + 0.17773438, + 0.24023438, + 0.5078125, + -0.49804688, + -0.10205078, + 0.10498047, + -0.36132812, + -0.47460938, + -0.20996094, + 0.010070801, + -0.546875, + 0.66796875, + -0.123046875, + -0.75390625, + 0.19628906, + 0.17480469, + 0.18261719, + -0.96875, + -0.26171875, + 0.4921875, + -0.40039062, + 0.296875, + 0.1640625, + -0.20507812, + -0.36132812, + 0.76171875, + -1.234375, + -0.625, + 0.060058594, + -0.09375, + -0.14746094, + 1.09375, + 0.057861328, + 0.22460938, + -0.703125, + 0.07470703, + 0.23828125, + -0.083984375, + -0.54296875, + 0.5546875, + -0.5, + -0.390625, + 0.106933594, + 0.6640625, + 0.27734375, + -0.953125, + 0.35351562, + -0.7734375, + -0.77734375, + 0.16503906, + -0.42382812, + 0.36914062, + 0.020141602, + -1.3515625, + 0.18847656, + 0.13476562, + -0.034179688, + -0.03930664, + -0.03857422, + -0.027954102, + 0.73828125, + -0.18945312, + -0.09814453, + -0.46289062, + 0.36914062, + 0.033203125, + 0.020874023, + -0.703125, + 0.91796875, + 0.38671875, + 0.625, + -0.19335938, + -0.16796875, + -0.58203125, + 0.21386719, + -0.032470703, + -0.296875, + -0.15625, + -0.1640625, + -0.74609375, + 0.328125, + 0.5546875, + -0.1953125, + 1.0546875, + 0.171875, + -0.099609375, + 0.5234375, + 0.05078125, + -0.35742188, + -0.2734375, + -1.3203125, + -0.8515625, + -0.16015625, + 0.01574707, + 0.29296875, + 0.18457031, + -0.265625, + 0.048339844, + 0.045654297, + -0.32226562, + 0.087890625, + -0.0047302246, + 0.38671875, + 0.10644531, + -0.06225586, + 1.03125, + 0.94140625, + -0.3203125, + 0.20800781, + -1.171875, + 0.48046875, + -0.091796875, + 0.20800781, + -0.1328125, + -0.20507812, + 0.28125, + -0.47070312, + -0.09033203, + 0.0013809204, + -0.08203125, + 0.43359375, + -0.03100586, + -0.060791016, + -0.53515625, + -1.46875, + 0.000101566315, + 0.515625, + 0.40625, + -0.10498047, + -0.15820312, + -0.009460449, + -0.77734375, + -0.5859375, + 0.9765625, + 0.099609375, + 0.51953125, + 0.38085938, + -0.09667969, 
+ -0.100097656, + -0.5, + -1.3125, + -0.18066406, + -0.099121094, + 0.26171875, + -0.14453125, + -0.546875, + 0.17578125, + 0.484375, + 0.765625, + 0.45703125, + 0.2734375, + 0.0028076172, + 0.17089844, + -0.32421875, + -0.37695312, + 0.30664062, + -0.48046875, + 0.07128906, + 0.031982422, + -0.31054688, + -0.055419922, + -0.29296875, + 0.3359375, + -0.296875, + 0.47851562, + -0.05126953, + 0.18457031, + -0.01953125, + -0.35742188, + 0.017944336, + -0.25, + 0.10595703, + 0.17382812, + -0.73828125, + 0.36914062, + -0.15234375, + -0.8125, + 0.17382812, + 0.048095703, + 0.5625, + -0.33789062, + 0.023071289, + -0.21972656, + 0.16015625, + 0.032958984, + -1.1171875, + -0.984375, + 0.83984375, + 0.009033203, + -0.042236328, + -0.46484375, + -0.08203125, + 0.44726562, + -0.765625, + -0.3984375, + -0.40820312, + -0.234375, + 0.044189453, + 0.119628906, + -0.7578125, + -0.55078125, + -0.4453125, + 0.7578125, + 0.34960938, + 0.96484375, + 0.35742188, + 0.36914062, + -0.35351562, + -0.36132812, + 1.109375, + 0.5859375, + 0.85546875, + -0.10644531, + -0.6953125, + -0.0066833496, + 0.042236328, + -0.06689453, + 0.36914062, + 0.9765625, + -0.3046875, + 0.59765625, + -0.6640625, + 0.21484375, + -0.07128906, + 1.1328125, + -0.51953125, + 0.86328125, + -0.11328125, + 0.15722656, + -0.36328125, + -0.04638672, + 1.4375, + 0.18457031, + -0.18359375, + 0.10595703, + -0.49023438, + -0.07324219, + -0.73046875, + -0.119140625, + 0.021118164, + 0.4921875, + -0.46875, + 0.28710938, + 0.3359375, + 0.11767578, + -0.2109375, + -0.14550781, + 0.39648438, + -0.27734375, + 0.48046875, + 0.12988281, + 0.45507812, + -0.375, + -0.84765625, + 0.25585938, + -0.36523438, + 0.8046875, + 0.42382812, + -0.24511719, + 0.54296875, + 0.71875, + 0.010009766, + -0.04296875, + 0.083984375, + -0.52734375, + 0.13964844, + -0.27539062, + -0.30273438, + 1.1484375, + -0.515625, + -0.19335938, + 0.58984375, + 0.049072266, + 0.703125, + -0.04272461, + 0.5078125, + 0.34960938, + -0.3359375, + -0.47460938, + 
0.049316406, + 0.36523438, + 0.7578125, + -0.022827148, + -0.71484375, + 0.21972656, + 0.09716797, + -0.203125, + -0.36914062, + 1.34375, + 0.34179688, + 0.46679688, + 1.078125, + 0.26171875, + 0.41992188, + 0.22363281, + -0.515625, + -0.5703125, + 0.13378906, + 0.26757812, + -0.22558594, + -0.5234375, + 0.06689453, + 0.08251953, + -0.625, + 0.16796875, + 0.43164062, + -0.55859375, + 0.28125, + 0.078125, + 0.6328125, + 0.23242188, + -0.064941406, + -0.004486084, + -0.20703125, + 0.2734375, + 0.453125, + -0.734375, + 0.04272461, + 0.36132812, + -0.19628906, + -0.12402344, + 1.3515625, + 0.25585938, + 0.4921875, + -0.29296875, + -0.58984375, + 0.021240234, + -0.044677734, + 0.7578125, + -0.7890625, + 0.10253906, + -0.15820312, + -0.5078125, + -0.39453125, + -0.453125, + 0.35742188, + 0.921875, + 0.44335938, + -0.49804688, + 0.44335938, + 0.31445312, + 0.58984375, + -1.0078125, + -0.22460938, + 0.24121094, + 0.87890625, + 0.66015625, + -0.390625, + -0.05053711, + 0.059570312, + 0.36132812, + -0.00038719177, + -0.017089844, + 0.62890625, + 0.203125, + 0.17480469, + 0.025512695, + 0.47460938, + 0.3125, + 1.140625, + 0.32421875, + -0.057861328, + 0.36914062, + -0.7265625, + -0.51953125, + 0.26953125, + 0.42773438, + 0.064453125, + 0.6328125, + 0.27148438, + -0.11767578, + 0.66796875, + -0.38671875, + 0.5234375, + -0.59375, + 0.5078125, + 0.008239746, + -0.34179688, + -0.27539062, + 0.5234375, + 1.296875, + 0.29492188, + -0.010986328, + -0.41210938, + 0.59375, + 0.061767578, + -0.33398438, + -2.03125, + 0.87890625, + -0.010620117, + 0.53125, + 0.14257812, + -0.515625, + -1.03125, + 0.578125, + 0.1875, + 0.44335938, + -0.33203125, + -0.36328125, + -0.3203125, + 0.29296875, + -0.8203125, + 0.41015625, + -0.48242188, + 0.66015625, + 0.5625, + -0.16503906, + -0.54296875, + -0.38085938, + 0.26171875, + 0.62109375, + 0.29101562, + -0.31054688, + 0.23730469, + -0.8515625, + 0.5234375, + 0.15332031, + 0.52734375, + -0.079589844, + -0.080566406, + -0.15527344, + -0.022827148, + 
0.030517578, + -0.1640625, + -0.421875, + 0.09716797, + 0.03930664, + -0.055908203, + -0.546875, + -0.47851562, + 0.091796875, + 0.32226562, + -0.94140625, + -0.04638672, + -1.203125, + -0.39648438, + 0.45507812, + 0.296875, + -0.45703125, + 0.37890625, + -0.122558594, + 0.28320312, + -0.01965332, + -0.11669922, + -0.34570312, + -0.53515625, + -0.091308594, + -0.9375, + -0.32617188, + 0.095214844, + -0.4765625, + 0.37890625, + -0.859375, + 1.1015625, + -0.08935547, + 0.46484375, + -0.19238281, + 0.7109375, + 0.040039062, + -0.5390625, + 0.22363281, + -0.70703125, + 0.4921875, + -0.119140625, + -0.26757812, + -0.08496094, + 0.0859375, + -0.00390625, + -0.013366699, + -0.03955078, + 0.07421875, + -0.13085938, + 0.29101562, + -0.12109375, + 0.45703125, + 0.021728516, + 0.38671875, + -0.3671875, + -0.52734375, + -0.115722656, + 0.125, + 0.5703125, + -1.234375, + 0.06298828, + -0.55859375, + 0.60546875, + 0.8125, + -0.0032958984, + -0.068359375, + -0.21191406, + 0.56640625, + 0.17285156, + -0.3515625, + 0.36328125, + -0.99609375, + 0.43554688, + -0.1015625, + 0.07080078, + -0.66796875, + 1.359375, + 0.41601562, + 0.15917969, + 0.17773438, + -0.28710938, + 0.021850586, + -0.46289062, + 0.17578125, + -0.03955078, + -0.026855469, + 0.5078125, + -0.65625, + 0.0012512207, + 0.044433594, + -0.18652344, + 0.4921875, + -0.75390625, + 0.0072021484, + 0.4375, + -0.31445312, + 0.20214844, + 0.15039062, + -0.63671875, + -0.296875, + -0.375, + -0.027709961, + 0.013427734, + 0.17089844, + 0.89453125, + 0.11621094, + -0.43945312, + -0.30859375, + 0.02709961, + 0.23242188, + -0.64453125, + -0.859375, + 0.22167969, + -0.023071289, + -0.052734375, + 0.3671875, + -0.18359375, + 0.81640625, + -0.11816406, + 0.028320312, + 0.19042969, + 0.012817383, + -0.43164062, + 0.55859375, + -0.27929688, + 0.14257812, + -0.140625, + -0.048583984, + -0.014526367, + 0.35742188, + 0.22753906, + 0.13183594, + 0.04638672, + 0.03930664, + -0.29296875, + -0.2109375, + -0.16308594, + -0.48046875, + 
-0.13378906, + -0.39257812, + 0.29296875, + -0.047851562, + -0.5546875, + 0.08300781, + -0.14941406, + -0.07080078, + 0.12451172, + 0.1953125, + -0.51171875, + -0.048095703, + 0.1953125, + -0.37695312, + 0.46875, + -0.084472656, + 0.19042969, + -0.39453125, + 0.69921875, + -0.0065307617, + 0.25390625, + -0.16992188, + -0.5078125, + 0.016845703, + 0.27929688, + -0.22070312, + 0.671875, + 0.18652344, + 0.25, + -0.046875, + -0.012023926, + -0.36523438, + 0.36523438, + -0.11279297, + 0.421875, + 0.079589844, + -0.100097656, + 0.37304688, + 0.29882812, + -0.10546875, + -0.36523438, + 0.040039062, + 0.546875, + 0.12890625, + -0.06542969, + -0.38085938, + -0.35742188, + -0.6484375, + -0.28515625, + 0.0107421875, + -0.055664062, + 0.45703125, + 0.33984375, + 0.26367188, + -0.23144531, + 0.012878418, + -0.875, + 0.11035156, + 0.33984375, + 0.203125, + 0.38867188, + 0.24902344, + -0.37304688, + -0.98046875, + -0.122558594, + -0.17871094, + -0.09277344, + 0.1796875, + 0.4453125, + -0.66796875, + 0.78515625, + 0.12988281, + 0.35546875, + 0.44140625, + 0.58984375, + 0.29492188, + 0.7734375, + -0.21972656, + -0.40234375, + -0.22265625, + 0.18359375, + 0.54296875, + 0.17382812, + 0.59375, + -0.390625, + -0.92578125, + -0.017456055, + -0.25, + 0.73828125, + 0.7578125, + -0.3828125, + -0.25976562, + 0.049072266, + 0.046875, + -0.3515625, + 0.30078125, + -1.03125, + -0.48828125, + 0.0017929077, + -0.26171875, + 0.20214844, + 0.29882812, + 0.064941406, + 0.21484375, + -0.55078125, + -0.021362305, + 0.12988281, + 0.27148438, + 0.38867188, + -0.19726562, + -0.55078125, + 0.1640625, + 0.32226562, + -0.72265625, + 0.36132812, + 1.21875, + -0.22070312, + -0.32421875, + -0.29882812, + 0.0024414062, + 0.19921875, + 0.734375, + 0.16210938, + 0.17871094, + -0.19140625, + 0.38476562, + -0.06591797, + -0.47070312, + -0.040039062, + -0.33007812, + -0.07910156, + -0.2890625, + 0.00970459, + 0.12695312, + -0.12060547, + -0.18847656, + 1.015625, + -0.032958984, + 0.12451172, + -0.38476562, + 
0.063964844, + 1.0859375, + 0.067871094, + -0.24511719, + 0.125, + 0.10546875, + -0.22460938, + -0.29101562, + 0.24414062, + -0.017944336, + -0.15625, + -0.60546875, + -0.25195312, + -0.46875, + 0.80859375, + -0.34960938, + 0.42382812, + 0.796875, + 0.296875, + -0.067871094, + 0.39453125, + 0.07470703, + 0.033935547, + 0.24414062, + 0.32617188, + 0.023925781, + 0.73046875, + 0.2109375, + -0.43164062, + 0.14453125, + 0.63671875, + 0.21972656, + -0.1875, + -0.18066406, + -0.22167969, + -1.3359375, + 0.52734375, + -0.40625, + -0.12988281, + 0.17480469, + -0.18066406, + 0.58984375, + -0.32421875, + -0.13476562, + 0.39257812, + -0.19238281, + 0.068359375, + 0.7265625, + -0.7109375, + -0.125, + 0.328125, + 0.34179688, + -0.48828125, + -0.10058594, + -0.83984375, + 0.30273438, + 0.008239746, + -1.390625, + 0.171875, + 0.34960938, + 0.44921875, + 0.22167969, + 0.60546875, + -0.36914062, + -0.028808594, + -0.19921875, + 0.6875, + 0.52734375, + -0.07421875, + 0.35546875, + 0.546875, + 0.08691406, + 0.23339844, + -0.984375, + -0.20507812, + 0.08544922, + 0.453125, + -0.07421875, + -0.953125, + 0.74609375, + -0.796875, + 0.47851562, + 0.81640625, + -0.44921875, + -0.33398438, + -0.54296875, + 0.46484375, + -0.390625, + -0.24121094, + -0.0115356445, + 1.1328125, + 1.0390625, + 0.6484375, + 0.35742188, + -0.29492188, + -0.0007095337, + -0.060302734, + 0.21777344, + 0.15136719, + -0.6171875, + 0.11328125, + -0.025878906, + 0.19238281, + 0.140625, + 0.171875, + 0.25195312, + 0.10546875, + 0.0008354187, + -0.13476562, + -0.26953125, + 0.025024414, + -0.28320312, + -0.107910156, + 1.015625, + 0.05493164, + -0.12988281, + 0.30859375, + 0.22558594, + -0.60546875, + 0.11328125, + -1.203125, + 0.6484375, + 0.087402344, + 0.32226562, + 0.63671875, + -0.07714844, + -1.390625, + -0.71875, + -0.34179688, + -0.10546875, + -0.37304688, + -0.09863281, + -0.41210938, + -0.14941406, + 0.41210938, + -0.20898438, + 0.18261719, + 0.67578125, + 0.41601562, + 0.32617188, + 0.2421875, + -0.14257812, + 
-0.6796875, + 0.01953125, + 0.34179688, + 0.20800781, + -0.123046875, + 0.087402344, + 0.85546875, + 0.33984375, + 0.33203125, + -0.68359375, + 0.44921875, + 0.50390625, + 0.083496094, + 0.10888672, + -0.09863281, + 0.55078125, + 0.09765625, + -0.50390625, + 0.13378906, + -0.29882812, + 0.030761719, + -0.64453125, + 0.22949219, + 0.43945312, + 0.16503906, + 0.10888672, + -0.12792969, + -0.039794922, + -0.111328125, + -0.35742188, + 0.053222656, + -0.78125, + -0.4375, + 0.359375, + -0.88671875, + -0.21972656, + -0.053710938, + 0.91796875, + -0.10644531, + 0.55859375, + -0.7734375, + 0.5078125, + 0.46484375, + 0.32226562, + 0.16796875, + -0.28515625, + 0.045410156, + -0.45117188, + 0.38867188, + -0.33398438, + -0.5234375, + 0.296875, + 0.6015625, + 0.3515625, + -0.734375, + 0.3984375, + -0.08251953, + 0.359375, + -0.28515625, + -0.88671875, + 0.0051879883, + 0.045166016, + -0.7421875, + -0.36523438, + 0.140625, + 0.18066406, + -0.171875, + -0.15625, + -0.53515625, + 0.2421875, + -0.19140625, + -0.18066406, + 0.25390625, + 0.6875, + -0.01965332, + -0.33203125, + 0.29492188, + 0.107421875, + -0.048339844, + -0.82421875, + 0.52734375, + 0.78125, + 0.8203125, + -0.90625, + 0.765625, + 0.0390625, + 0.045410156, + 0.26367188, + -0.14355469, + -0.26367188, + 0.390625, + -0.10888672, + 0.33007812, + -0.5625, + 0.08105469, + -0.13769531, + 0.8515625, + -0.14453125, + 0.77734375, + -0.48046875, + -0.3515625, + -0.25390625, + -0.09277344, + 0.23925781, + -0.022338867, + -0.45898438, + 0.36132812, + -0.23828125, + 0.265625, + -0.48632812, + -0.46875, + -0.75390625, + 1.3125, + 0.78125, + -0.63671875, + -1.21875, + 0.5078125, + -0.27734375, + -0.118652344, + 0.041992188, + -0.14648438, + -0.8046875, + 0.21679688, + -0.79296875, + 0.28320312, + -0.09667969, + 0.42773438, + 0.49414062, + 0.44726562, + 0.21972656, + -0.02746582, + -0.03540039, + -0.14941406, + -0.515625, + -0.27929688, + 0.9609375, + -0.007598877, + 0.34765625, + -0.060546875, + -0.44726562, + 0.7421875, + 
0.15332031, + 0.45117188, + -0.4921875, + 0.07080078, + 0.5625, + 0.3984375, + -0.20019531, + 0.014892578, + 0.63671875, + -0.0071411133, + 0.016357422, + 1.0625, + 0.049316406, + 0.18066406, + 0.09814453, + -0.52734375, + -0.359375, + -0.072265625, + -0.41992188, + 0.39648438, + 0.38671875, + -0.30273438, + -0.056640625, + -0.640625, + -0.44921875, + 0.49414062, + 0.29101562, + 0.49609375, + 0.40429688, + -0.10205078, + 0.49414062, + -0.28125, + -0.12695312, + -0.0022735596, + -0.37304688, + 0.122558594, + 0.07519531, + -0.12597656, + -0.38085938, + -0.19824219, + -0.40039062, + 0.56640625, + -1.140625, + -0.515625, + -0.17578125, + -0.765625, + -0.43945312, + 0.3359375, + -0.24707031, + 0.32617188, + -0.45117188, + -0.37109375, + 0.45117188, + -0.27539062, + -0.38867188, + 0.09082031, + 0.17675781, + 0.49414062, + 0.19921875, + 0.17480469, + 0.8515625, + -0.23046875, + -0.234375, + -0.28515625, + 0.10253906, + 0.29101562, + -0.3359375, + -0.203125, + 0.6484375, + 0.11767578, + -0.20214844, + -0.42382812, + 0.26367188, + 0.6328125, + 0.0059509277, + 0.08691406, + -1.5625, + -0.43554688, + 0.17675781, + 0.091796875, + -0.5234375, + -0.09863281, + 0.20605469, + 0.16601562, + -0.578125, + 0.017700195, + 0.41015625, + 1.03125, + -0.55078125, + 0.21289062, + -0.35351562, + 0.24316406, + -0.123535156, + 0.11035156, + -0.48242188, + -0.34179688, + 0.45117188, + 0.3125, + -0.071777344, + 0.12792969, + 0.55859375, + 0.063964844, + -0.21191406, + 0.01965332, + -1.359375, + -0.21582031, + -0.019042969, + 0.16308594, + -0.3671875, + -0.40625, + -1.0234375, + -0.21289062, + 0.24023438, + -0.28125, + 0.26953125, + -0.14550781, + -0.087890625, + 0.16113281, + -0.49804688, + -0.17675781, + -0.890625, + 0.27929688, + 0.484375, + 0.27148438, + 0.11816406, + 0.83984375, + 0.029052734, + -0.890625, + 0.66796875, + 0.78515625, + -0.953125, + 0.49414062, + -0.546875, + 0.106933594, + -0.08251953, + 0.2890625, + -0.1484375, + -0.85546875, + 0.32421875, + -0.0040893555, + -0.16601562, + 
-0.16699219, + 0.24414062, + -0.5078125, + 0.25390625, + -0.10253906, + 0.15625, + 0.140625, + -0.27539062, + -0.546875, + -0.5546875, + -0.71875, + 0.37304688, + 0.060058594, + -0.076171875, + 0.44921875, + 0.06933594, + -0.28710938, + -0.22949219, + 0.17578125, + 0.09814453, + 0.4765625, + -0.95703125, + -0.03540039, + 0.21289062, + -0.7578125, + -0.07373047, + 0.10546875, + 0.07128906, + 0.76171875, + 0.4296875, + -0.09375, + 0.27539062, + -0.55078125, + 0.29882812, + -0.42382812, + 0.32617188, + -0.39648438, + 0.12451172, + 0.16503906, + -0.22460938, + -0.65625, + -0.022094727, + 0.61328125, + -0.024780273, + 0.62109375, + -0.033447266, + 0.515625, + 0.12890625, + -0.21875, + -0.08642578, + 0.49804688, + -0.2265625, + -0.29296875, + 0.19238281, + 0.3515625, + -1.265625, + 0.57421875, + 0.20117188, + -0.28320312, + 0.1953125, + -0.30664062, + 0.2265625, + -0.11230469, + 0.83984375, + 0.111328125, + 0.265625, + 0.71484375, + -0.625, + 0.38867188, + 0.47070312, + -0.32617188, + -0.171875, + 1.0078125, + 0.19726562, + -0.118652344, + 0.63671875, + -0.068359375, + -0.25585938, + 0.4140625, + -0.29296875, + 0.21386719, + -0.064453125, + 0.15820312, + -0.89453125, + -0.16308594, + 0.48046875, + 0.14648438, + -0.5703125, + 0.84765625, + -0.19042969, + 0.03515625, + 0.42578125, + -0.27539062, + -0.5390625, + 0.95703125, + 0.2734375, + 0.16699219, + -0.328125, + 0.11279297, + 0.003250122, + 0.47265625, + -0.31640625, + 0.546875, + 0.55859375, + 0.06933594, + -0.61328125, + -0.16210938, + -0.375, + 0.100097656, + -0.088378906, + 0.12695312, + 0.079589844, + 0.123535156, + -1.0078125, + 0.6875, + 0.022949219, + -0.40039062, + -0.09863281, + 0.29101562, + -1.2890625, + -0.20996094, + 0.36328125, + -0.3515625, + 0.7890625, + 0.12207031, + 0.48046875, + -0.13671875, + -0.041015625, + 0.19824219, + 0.19921875, + 0.01171875, + -0.37695312, + -0.62890625, + 0.9375, + -0.671875, + 0.24609375, + 0.6484375, + -0.29101562, + 0.076171875, + 0.62109375, + -0.5546875, + 0.36523438, + 
0.75390625, + -0.19140625, + -0.875, + -0.8203125, + -0.24414062, + -0.625, + 0.1796875, + -0.40039062, + 0.25390625, + -0.14550781, + -0.21679688, + -0.828125, + 0.3359375, + 0.43554688, + 0.55078125, + -0.44921875, + -0.28710938, + 0.24023438, + 0.18066406, + -0.6953125, + 0.020385742, + -0.11376953, + 0.13867188, + -0.92578125, + 0.33398438, + -0.328125, + 0.78125, + -0.45507812, + -0.07470703, + 0.34179688, + 0.07080078, + 0.76171875, + 0.37890625, + -0.10644531, + 0.90234375, + -0.21875, + -0.15917969, + -0.36132812, + 0.2109375, + -0.45703125, + -0.76953125, + 0.21289062, + 0.26367188, + 0.49804688, + 0.35742188, + -0.20019531, + 0.31054688, + 0.34179688, + 0.17089844, + -0.15429688, + 0.39648438, + -0.5859375, + 0.20996094, + -0.40039062, + 0.5703125, + -0.515625, + 0.5234375, + 0.049560547, + 0.328125, + 0.24804688, + 0.42578125, + 0.609375, + 0.19238281, + 0.27929688, + 0.19335938, + 0.78125, + -0.9921875, + 0.23925781, + -1.3828125, + -0.22949219, + -0.578125, + -0.13964844, + -0.17382812, + -0.011169434, + 0.26171875, + -0.73046875, + -1.4375, + 0.6953125, + -0.7421875, + 0.052246094, + 0.12207031, + 1.3046875, + 0.38867188, + 0.040283203, + -0.546875, + -0.0021514893, + 0.18457031, + -0.5546875, + -0.51171875, + -0.16308594, + -0.104003906, + -0.38867188, + -0.20996094, + -0.8984375, + 0.6015625, + -0.30078125, + -0.13769531, + 0.16113281, + 0.58203125, + -0.23730469, + -0.125, + -1.0234375, + 0.875, + -0.7109375, + 0.29101562, + 0.09667969, + -0.3203125, + -0.48046875, + 0.37890625, + 0.734375, + -0.28710938, + -0.29882812, + -0.05493164, + 0.34765625, + -0.84375, + 0.65625, + 0.578125, + -0.20019531, + 0.13769531, + 0.10058594, + -0.37109375, + 0.36523438, + -0.22167969, + 0.72265625, + ], + "inputTextTokenCount": 6, + }, + ], + "amazon.titan-embed-text-v1::This is an embedding test.": [ + {"Content-Type": "application/json", "x-amzn-RequestId": "11233989-07e8-4ecb-9ba6-79601ba6d8cc"}, + 200, + { + "embedding": [ + -0.14160156, + 0.034423828, + 
0.54296875, + 0.10986328, + 0.053466797, + 0.3515625, + 0.12988281, + -0.0002708435, + -0.21484375, + 0.060302734, + 0.58984375, + -0.5859375, + 0.52734375, + 0.82421875, + -0.91015625, + -0.19628906, + 0.45703125, + 0.609375, + -0.67578125, + 0.39453125, + -0.46875, + -0.25390625, + -0.21191406, + 0.114746094, + 0.31640625, + -0.41015625, + -0.32617188, + -0.43554688, + 0.4765625, + -0.4921875, + 0.40429688, + 0.06542969, + 0.859375, + -0.008056641, + -0.19921875, + 0.072753906, + 0.33203125, + 0.69921875, + 0.39453125, + 0.15527344, + 0.08886719, + -0.25, + 0.859375, + 0.22949219, + -0.19042969, + 0.13769531, + -0.078125, + 0.41210938, + 0.875, + 0.5234375, + 0.59765625, + -0.22949219, + -0.22558594, + -0.47460938, + 0.37695312, + 0.51953125, + -0.5703125, + 0.46679688, + 0.43554688, + 0.17480469, + -0.080566406, + -0.16699219, + -0.734375, + -1.0625, + -0.33984375, + 0.390625, + -0.18847656, + -0.5234375, + -0.48828125, + 0.44921875, + -0.09814453, + -0.3359375, + 0.087402344, + 0.36914062, + 1.3203125, + 0.25585938, + 0.14746094, + -0.059570312, + -0.15820312, + -0.037353516, + -0.61328125, + -0.6484375, + -0.35351562, + 0.55078125, + -0.26953125, + 0.90234375, + 0.3671875, + 0.31054688, + 0.00014019012, + -0.171875, + 0.025512695, + 0.5078125, + 0.11621094, + 0.33203125, + 0.8125, + -0.3046875, + -1.078125, + -0.5703125, + 0.26171875, + -0.4609375, + 0.203125, + 0.44726562, + -0.5078125, + 0.41601562, + -0.1953125, + 0.028930664, + -0.57421875, + 0.2265625, + 0.13574219, + -0.040039062, + -0.22949219, + -0.515625, + -0.19042969, + -0.30078125, + 0.10058594, + -0.66796875, + 0.6015625, + 0.296875, + -0.765625, + -0.87109375, + 0.2265625, + 0.068847656, + -0.088378906, + -0.1328125, + -0.796875, + -0.37304688, + 0.47460938, + -0.3515625, + -0.8125, + -0.32226562, + 0.265625, + 0.3203125, + -0.4140625, + -0.49023438, + 0.859375, + -0.19140625, + -0.6328125, + 0.10546875, + -0.5625, + 0.66015625, + 0.26171875, + -0.2109375, + 0.421875, + -0.82421875, + 0.29296875, 
+ 0.17773438, + 0.24023438, + 0.5078125, + -0.49804688, + -0.10205078, + 0.10498047, + -0.36132812, + -0.47460938, + -0.20996094, + 0.010070801, + -0.546875, + 0.66796875, + -0.123046875, + -0.75390625, + 0.19628906, + 0.17480469, + 0.18261719, + -0.96875, + -0.26171875, + 0.4921875, + -0.40039062, + 0.296875, + 0.1640625, + -0.20507812, + -0.36132812, + 0.76171875, + -1.234375, + -0.625, + 0.060058594, + -0.09375, + -0.14746094, + 1.09375, + 0.057861328, + 0.22460938, + -0.703125, + 0.07470703, + 0.23828125, + -0.083984375, + -0.54296875, + 0.5546875, + -0.5, + -0.390625, + 0.106933594, + 0.6640625, + 0.27734375, + -0.953125, + 0.35351562, + -0.7734375, + -0.77734375, + 0.16503906, + -0.42382812, + 0.36914062, + 0.020141602, + -1.3515625, + 0.18847656, + 0.13476562, + -0.034179688, + -0.03930664, + -0.03857422, + -0.027954102, + 0.73828125, + -0.18945312, + -0.09814453, + -0.46289062, + 0.36914062, + 0.033203125, + 0.020874023, + -0.703125, + 0.91796875, + 0.38671875, + 0.625, + -0.19335938, + -0.16796875, + -0.58203125, + 0.21386719, + -0.032470703, + -0.296875, + -0.15625, + -0.1640625, + -0.74609375, + 0.328125, + 0.5546875, + -0.1953125, + 1.0546875, + 0.171875, + -0.099609375, + 0.5234375, + 0.05078125, + -0.35742188, + -0.2734375, + -1.3203125, + -0.8515625, + -0.16015625, + 0.01574707, + 0.29296875, + 0.18457031, + -0.265625, + 0.048339844, + 0.045654297, + -0.32226562, + 0.087890625, + -0.0047302246, + 0.38671875, + 0.10644531, + -0.06225586, + 1.03125, + 0.94140625, + -0.3203125, + 0.20800781, + -1.171875, + 0.48046875, + -0.091796875, + 0.20800781, + -0.1328125, + -0.20507812, + 0.28125, + -0.47070312, + -0.09033203, + 0.0013809204, + -0.08203125, + 0.43359375, + -0.03100586, + -0.060791016, + -0.53515625, + -1.46875, + 0.000101566315, + 0.515625, + 0.40625, + -0.10498047, + -0.15820312, + -0.009460449, + -0.77734375, + -0.5859375, + 0.9765625, + 0.099609375, + 0.51953125, + 0.38085938, + -0.09667969, + -0.100097656, + -0.5, + -1.3125, + -0.18066406, + 
-0.099121094, + 0.26171875, + -0.14453125, + -0.546875, + 0.17578125, + 0.484375, + 0.765625, + 0.45703125, + 0.2734375, + 0.0028076172, + 0.17089844, + -0.32421875, + -0.37695312, + 0.30664062, + -0.48046875, + 0.07128906, + 0.031982422, + -0.31054688, + -0.055419922, + -0.29296875, + 0.3359375, + -0.296875, + 0.47851562, + -0.05126953, + 0.18457031, + -0.01953125, + -0.35742188, + 0.017944336, + -0.25, + 0.10595703, + 0.17382812, + -0.73828125, + 0.36914062, + -0.15234375, + -0.8125, + 0.17382812, + 0.048095703, + 0.5625, + -0.33789062, + 0.023071289, + -0.21972656, + 0.16015625, + 0.032958984, + -1.1171875, + -0.984375, + 0.83984375, + 0.009033203, + -0.042236328, + -0.46484375, + -0.08203125, + 0.44726562, + -0.765625, + -0.3984375, + -0.40820312, + -0.234375, + 0.044189453, + 0.119628906, + -0.7578125, + -0.55078125, + -0.4453125, + 0.7578125, + 0.34960938, + 0.96484375, + 0.35742188, + 0.36914062, + -0.35351562, + -0.36132812, + 1.109375, + 0.5859375, + 0.85546875, + -0.10644531, + -0.6953125, + -0.0066833496, + 0.042236328, + -0.06689453, + 0.36914062, + 0.9765625, + -0.3046875, + 0.59765625, + -0.6640625, + 0.21484375, + -0.07128906, + 1.1328125, + -0.51953125, + 0.86328125, + -0.11328125, + 0.15722656, + -0.36328125, + -0.04638672, + 1.4375, + 0.18457031, + -0.18359375, + 0.10595703, + -0.49023438, + -0.07324219, + -0.73046875, + -0.119140625, + 0.021118164, + 0.4921875, + -0.46875, + 0.28710938, + 0.3359375, + 0.11767578, + -0.2109375, + -0.14550781, + 0.39648438, + -0.27734375, + 0.48046875, + 0.12988281, + 0.45507812, + -0.375, + -0.84765625, + 0.25585938, + -0.36523438, + 0.8046875, + 0.42382812, + -0.24511719, + 0.54296875, + 0.71875, + 0.010009766, + -0.04296875, + 0.083984375, + -0.52734375, + 0.13964844, + -0.27539062, + -0.30273438, + 1.1484375, + -0.515625, + -0.19335938, + 0.58984375, + 0.049072266, + 0.703125, + -0.04272461, + 0.5078125, + 0.34960938, + -0.3359375, + -0.47460938, + 0.049316406, + 0.36523438, + 0.7578125, + -0.022827148, + 
-0.71484375, + 0.21972656, + 0.09716797, + -0.203125, + -0.36914062, + 1.34375, + 0.34179688, + 0.46679688, + 1.078125, + 0.26171875, + 0.41992188, + 0.22363281, + -0.515625, + -0.5703125, + 0.13378906, + 0.26757812, + -0.22558594, + -0.5234375, + 0.06689453, + 0.08251953, + -0.625, + 0.16796875, + 0.43164062, + -0.55859375, + 0.28125, + 0.078125, + 0.6328125, + 0.23242188, + -0.064941406, + -0.004486084, + -0.20703125, + 0.2734375, + 0.453125, + -0.734375, + 0.04272461, + 0.36132812, + -0.19628906, + -0.12402344, + 1.3515625, + 0.25585938, + 0.4921875, + -0.29296875, + -0.58984375, + 0.021240234, + -0.044677734, + 0.7578125, + -0.7890625, + 0.10253906, + -0.15820312, + -0.5078125, + -0.39453125, + -0.453125, + 0.35742188, + 0.921875, + 0.44335938, + -0.49804688, + 0.44335938, + 0.31445312, + 0.58984375, + -1.0078125, + -0.22460938, + 0.24121094, + 0.87890625, + 0.66015625, + -0.390625, + -0.05053711, + 0.059570312, + 0.36132812, + -0.00038719177, + -0.017089844, + 0.62890625, + 0.203125, + 0.17480469, + 0.025512695, + 0.47460938, + 0.3125, + 1.140625, + 0.32421875, + -0.057861328, + 0.36914062, + -0.7265625, + -0.51953125, + 0.26953125, + 0.42773438, + 0.064453125, + 0.6328125, + 0.27148438, + -0.11767578, + 0.66796875, + -0.38671875, + 0.5234375, + -0.59375, + 0.5078125, + 0.008239746, + -0.34179688, + -0.27539062, + 0.5234375, + 1.296875, + 0.29492188, + -0.010986328, + -0.41210938, + 0.59375, + 0.061767578, + -0.33398438, + -2.03125, + 0.87890625, + -0.010620117, + 0.53125, + 0.14257812, + -0.515625, + -1.03125, + 0.578125, + 0.1875, + 0.44335938, + -0.33203125, + -0.36328125, + -0.3203125, + 0.29296875, + -0.8203125, + 0.41015625, + -0.48242188, + 0.66015625, + 0.5625, + -0.16503906, + -0.54296875, + -0.38085938, + 0.26171875, + 0.62109375, + 0.29101562, + -0.31054688, + 0.23730469, + -0.8515625, + 0.5234375, + 0.15332031, + 0.52734375, + -0.079589844, + -0.080566406, + -0.15527344, + -0.022827148, + 0.030517578, + -0.1640625, + -0.421875, + 0.09716797, + 
0.03930664, + -0.055908203, + -0.546875, + -0.47851562, + 0.091796875, + 0.32226562, + -0.94140625, + -0.04638672, + -1.203125, + -0.39648438, + 0.45507812, + 0.296875, + -0.45703125, + 0.37890625, + -0.122558594, + 0.28320312, + -0.01965332, + -0.11669922, + -0.34570312, + -0.53515625, + -0.091308594, + -0.9375, + -0.32617188, + 0.095214844, + -0.4765625, + 0.37890625, + -0.859375, + 1.1015625, + -0.08935547, + 0.46484375, + -0.19238281, + 0.7109375, + 0.040039062, + -0.5390625, + 0.22363281, + -0.70703125, + 0.4921875, + -0.119140625, + -0.26757812, + -0.08496094, + 0.0859375, + -0.00390625, + -0.013366699, + -0.03955078, + 0.07421875, + -0.13085938, + 0.29101562, + -0.12109375, + 0.45703125, + 0.021728516, + 0.38671875, + -0.3671875, + -0.52734375, + -0.115722656, + 0.125, + 0.5703125, + -1.234375, + 0.06298828, + -0.55859375, + 0.60546875, + 0.8125, + -0.0032958984, + -0.068359375, + -0.21191406, + 0.56640625, + 0.17285156, + -0.3515625, + 0.36328125, + -0.99609375, + 0.43554688, + -0.1015625, + 0.07080078, + -0.66796875, + 1.359375, + 0.41601562, + 0.15917969, + 0.17773438, + -0.28710938, + 0.021850586, + -0.46289062, + 0.17578125, + -0.03955078, + -0.026855469, + 0.5078125, + -0.65625, + 0.0012512207, + 0.044433594, + -0.18652344, + 0.4921875, + -0.75390625, + 0.0072021484, + 0.4375, + -0.31445312, + 0.20214844, + 0.15039062, + -0.63671875, + -0.296875, + -0.375, + -0.027709961, + 0.013427734, + 0.17089844, + 0.89453125, + 0.11621094, + -0.43945312, + -0.30859375, + 0.02709961, + 0.23242188, + -0.64453125, + -0.859375, + 0.22167969, + -0.023071289, + -0.052734375, + 0.3671875, + -0.18359375, + 0.81640625, + -0.11816406, + 0.028320312, + 0.19042969, + 0.012817383, + -0.43164062, + 0.55859375, + -0.27929688, + 0.14257812, + -0.140625, + -0.048583984, + -0.014526367, + 0.35742188, + 0.22753906, + 0.13183594, + 0.04638672, + 0.03930664, + -0.29296875, + -0.2109375, + -0.16308594, + -0.48046875, + -0.13378906, + -0.39257812, + 0.29296875, + -0.047851562, + 
-0.5546875, + 0.08300781, + -0.14941406, + -0.07080078, + 0.12451172, + 0.1953125, + -0.51171875, + -0.048095703, + 0.1953125, + -0.37695312, + 0.46875, + -0.084472656, + 0.19042969, + -0.39453125, + 0.69921875, + -0.0065307617, + 0.25390625, + -0.16992188, + -0.5078125, + 0.016845703, + 0.27929688, + -0.22070312, + 0.671875, + 0.18652344, + 0.25, + -0.046875, + -0.012023926, + -0.36523438, + 0.36523438, + -0.11279297, + 0.421875, + 0.079589844, + -0.100097656, + 0.37304688, + 0.29882812, + -0.10546875, + -0.36523438, + 0.040039062, + 0.546875, + 0.12890625, + -0.06542969, + -0.38085938, + -0.35742188, + -0.6484375, + -0.28515625, + 0.0107421875, + -0.055664062, + 0.45703125, + 0.33984375, + 0.26367188, + -0.23144531, + 0.012878418, + -0.875, + 0.11035156, + 0.33984375, + 0.203125, + 0.38867188, + 0.24902344, + -0.37304688, + -0.98046875, + -0.122558594, + -0.17871094, + -0.09277344, + 0.1796875, + 0.4453125, + -0.66796875, + 0.78515625, + 0.12988281, + 0.35546875, + 0.44140625, + 0.58984375, + 0.29492188, + 0.7734375, + -0.21972656, + -0.40234375, + -0.22265625, + 0.18359375, + 0.54296875, + 0.17382812, + 0.59375, + -0.390625, + -0.92578125, + -0.017456055, + -0.25, + 0.73828125, + 0.7578125, + -0.3828125, + -0.25976562, + 0.049072266, + 0.046875, + -0.3515625, + 0.30078125, + -1.03125, + -0.48828125, + 0.0017929077, + -0.26171875, + 0.20214844, + 0.29882812, + 0.064941406, + 0.21484375, + -0.55078125, + -0.021362305, + 0.12988281, + 0.27148438, + 0.38867188, + -0.19726562, + -0.55078125, + 0.1640625, + 0.32226562, + -0.72265625, + 0.36132812, + 1.21875, + -0.22070312, + -0.32421875, + -0.29882812, + 0.0024414062, + 0.19921875, + 0.734375, + 0.16210938, + 0.17871094, + -0.19140625, + 0.38476562, + -0.06591797, + -0.47070312, + -0.040039062, + -0.33007812, + -0.07910156, + -0.2890625, + 0.00970459, + 0.12695312, + -0.12060547, + -0.18847656, + 1.015625, + -0.032958984, + 0.12451172, + -0.38476562, + 0.063964844, + 1.0859375, + 0.067871094, + -0.24511719, + 0.125, + 
0.10546875, + -0.22460938, + -0.29101562, + 0.24414062, + -0.017944336, + -0.15625, + -0.60546875, + -0.25195312, + -0.46875, + 0.80859375, + -0.34960938, + 0.42382812, + 0.796875, + 0.296875, + -0.067871094, + 0.39453125, + 0.07470703, + 0.033935547, + 0.24414062, + 0.32617188, + 0.023925781, + 0.73046875, + 0.2109375, + -0.43164062, + 0.14453125, + 0.63671875, + 0.21972656, + -0.1875, + -0.18066406, + -0.22167969, + -1.3359375, + 0.52734375, + -0.40625, + -0.12988281, + 0.17480469, + -0.18066406, + 0.58984375, + -0.32421875, + -0.13476562, + 0.39257812, + -0.19238281, + 0.068359375, + 0.7265625, + -0.7109375, + -0.125, + 0.328125, + 0.34179688, + -0.48828125, + -0.10058594, + -0.83984375, + 0.30273438, + 0.008239746, + -1.390625, + 0.171875, + 0.34960938, + 0.44921875, + 0.22167969, + 0.60546875, + -0.36914062, + -0.028808594, + -0.19921875, + 0.6875, + 0.52734375, + -0.07421875, + 0.35546875, + 0.546875, + 0.08691406, + 0.23339844, + -0.984375, + -0.20507812, + 0.08544922, + 0.453125, + -0.07421875, + -0.953125, + 0.74609375, + -0.796875, + 0.47851562, + 0.81640625, + -0.44921875, + -0.33398438, + -0.54296875, + 0.46484375, + -0.390625, + -0.24121094, + -0.0115356445, + 1.1328125, + 1.0390625, + 0.6484375, + 0.35742188, + -0.29492188, + -0.0007095337, + -0.060302734, + 0.21777344, + 0.15136719, + -0.6171875, + 0.11328125, + -0.025878906, + 0.19238281, + 0.140625, + 0.171875, + 0.25195312, + 0.10546875, + 0.0008354187, + -0.13476562, + -0.26953125, + 0.025024414, + -0.28320312, + -0.107910156, + 1.015625, + 0.05493164, + -0.12988281, + 0.30859375, + 0.22558594, + -0.60546875, + 0.11328125, + -1.203125, + 0.6484375, + 0.087402344, + 0.32226562, + 0.63671875, + -0.07714844, + -1.390625, + -0.71875, + -0.34179688, + -0.10546875, + -0.37304688, + -0.09863281, + -0.41210938, + -0.14941406, + 0.41210938, + -0.20898438, + 0.18261719, + 0.67578125, + 0.41601562, + 0.32617188, + 0.2421875, + -0.14257812, + -0.6796875, + 0.01953125, + 0.34179688, + 0.20800781, + 
-0.123046875, + 0.087402344, + 0.85546875, + 0.33984375, + 0.33203125, + -0.68359375, + 0.44921875, + 0.50390625, + 0.083496094, + 0.10888672, + -0.09863281, + 0.55078125, + 0.09765625, + -0.50390625, + 0.13378906, + -0.29882812, + 0.030761719, + -0.64453125, + 0.22949219, + 0.43945312, + 0.16503906, + 0.10888672, + -0.12792969, + -0.039794922, + -0.111328125, + -0.35742188, + 0.053222656, + -0.78125, + -0.4375, + 0.359375, + -0.88671875, + -0.21972656, + -0.053710938, + 0.91796875, + -0.10644531, + 0.55859375, + -0.7734375, + 0.5078125, + 0.46484375, + 0.32226562, + 0.16796875, + -0.28515625, + 0.045410156, + -0.45117188, + 0.38867188, + -0.33398438, + -0.5234375, + 0.296875, + 0.6015625, + 0.3515625, + -0.734375, + 0.3984375, + -0.08251953, + 0.359375, + -0.28515625, + -0.88671875, + 0.0051879883, + 0.045166016, + -0.7421875, + -0.36523438, + 0.140625, + 0.18066406, + -0.171875, + -0.15625, + -0.53515625, + 0.2421875, + -0.19140625, + -0.18066406, + 0.25390625, + 0.6875, + -0.01965332, + -0.33203125, + 0.29492188, + 0.107421875, + -0.048339844, + -0.82421875, + 0.52734375, + 0.78125, + 0.8203125, + -0.90625, + 0.765625, + 0.0390625, + 0.045410156, + 0.26367188, + -0.14355469, + -0.26367188, + 0.390625, + -0.10888672, + 0.33007812, + -0.5625, + 0.08105469, + -0.13769531, + 0.8515625, + -0.14453125, + 0.77734375, + -0.48046875, + -0.3515625, + -0.25390625, + -0.09277344, + 0.23925781, + -0.022338867, + -0.45898438, + 0.36132812, + -0.23828125, + 0.265625, + -0.48632812, + -0.46875, + -0.75390625, + 1.3125, + 0.78125, + -0.63671875, + -1.21875, + 0.5078125, + -0.27734375, + -0.118652344, + 0.041992188, + -0.14648438, + -0.8046875, + 0.21679688, + -0.79296875, + 0.28320312, + -0.09667969, + 0.42773438, + 0.49414062, + 0.44726562, + 0.21972656, + -0.02746582, + -0.03540039, + -0.14941406, + -0.515625, + -0.27929688, + 0.9609375, + -0.007598877, + 0.34765625, + -0.060546875, + -0.44726562, + 0.7421875, + 0.15332031, + 0.45117188, + -0.4921875, + 0.07080078, + 0.5625, + 
0.3984375, + -0.20019531, + 0.014892578, + 0.63671875, + -0.0071411133, + 0.016357422, + 1.0625, + 0.049316406, + 0.18066406, + 0.09814453, + -0.52734375, + -0.359375, + -0.072265625, + -0.41992188, + 0.39648438, + 0.38671875, + -0.30273438, + -0.056640625, + -0.640625, + -0.44921875, + 0.49414062, + 0.29101562, + 0.49609375, + 0.40429688, + -0.10205078, + 0.49414062, + -0.28125, + -0.12695312, + -0.0022735596, + -0.37304688, + 0.122558594, + 0.07519531, + -0.12597656, + -0.38085938, + -0.19824219, + -0.40039062, + 0.56640625, + -1.140625, + -0.515625, + -0.17578125, + -0.765625, + -0.43945312, + 0.3359375, + -0.24707031, + 0.32617188, + -0.45117188, + -0.37109375, + 0.45117188, + -0.27539062, + -0.38867188, + 0.09082031, + 0.17675781, + 0.49414062, + 0.19921875, + 0.17480469, + 0.8515625, + -0.23046875, + -0.234375, + -0.28515625, + 0.10253906, + 0.29101562, + -0.3359375, + -0.203125, + 0.6484375, + 0.11767578, + -0.20214844, + -0.42382812, + 0.26367188, + 0.6328125, + 0.0059509277, + 0.08691406, + -1.5625, + -0.43554688, + 0.17675781, + 0.091796875, + -0.5234375, + -0.09863281, + 0.20605469, + 0.16601562, + -0.578125, + 0.017700195, + 0.41015625, + 1.03125, + -0.55078125, + 0.21289062, + -0.35351562, + 0.24316406, + -0.123535156, + 0.11035156, + -0.48242188, + -0.34179688, + 0.45117188, + 0.3125, + -0.071777344, + 0.12792969, + 0.55859375, + 0.063964844, + -0.21191406, + 0.01965332, + -1.359375, + -0.21582031, + -0.019042969, + 0.16308594, + -0.3671875, + -0.40625, + -1.0234375, + -0.21289062, + 0.24023438, + -0.28125, + 0.26953125, + -0.14550781, + -0.087890625, + 0.16113281, + -0.49804688, + -0.17675781, + -0.890625, + 0.27929688, + 0.484375, + 0.27148438, + 0.11816406, + 0.83984375, + 0.029052734, + -0.890625, + 0.66796875, + 0.78515625, + -0.953125, + 0.49414062, + -0.546875, + 0.106933594, + -0.08251953, + 0.2890625, + -0.1484375, + -0.85546875, + 0.32421875, + -0.0040893555, + -0.16601562, + -0.16699219, + 0.24414062, + -0.5078125, + 0.25390625, + 
-0.10253906, + 0.15625, + 0.140625, + -0.27539062, + -0.546875, + -0.5546875, + -0.71875, + 0.37304688, + 0.060058594, + -0.076171875, + 0.44921875, + 0.06933594, + -0.28710938, + -0.22949219, + 0.17578125, + 0.09814453, + 0.4765625, + -0.95703125, + -0.03540039, + 0.21289062, + -0.7578125, + -0.07373047, + 0.10546875, + 0.07128906, + 0.76171875, + 0.4296875, + -0.09375, + 0.27539062, + -0.55078125, + 0.29882812, + -0.42382812, + 0.32617188, + -0.39648438, + 0.12451172, + 0.16503906, + -0.22460938, + -0.65625, + -0.022094727, + 0.61328125, + -0.024780273, + 0.62109375, + -0.033447266, + 0.515625, + 0.12890625, + -0.21875, + -0.08642578, + 0.49804688, + -0.2265625, + -0.29296875, + 0.19238281, + 0.3515625, + -1.265625, + 0.57421875, + 0.20117188, + -0.28320312, + 0.1953125, + -0.30664062, + 0.2265625, + -0.11230469, + 0.83984375, + 0.111328125, + 0.265625, + 0.71484375, + -0.625, + 0.38867188, + 0.47070312, + -0.32617188, + -0.171875, + 1.0078125, + 0.19726562, + -0.118652344, + 0.63671875, + -0.068359375, + -0.25585938, + 0.4140625, + -0.29296875, + 0.21386719, + -0.064453125, + 0.15820312, + -0.89453125, + -0.16308594, + 0.48046875, + 0.14648438, + -0.5703125, + 0.84765625, + -0.19042969, + 0.03515625, + 0.42578125, + -0.27539062, + -0.5390625, + 0.95703125, + 0.2734375, + 0.16699219, + -0.328125, + 0.11279297, + 0.003250122, + 0.47265625, + -0.31640625, + 0.546875, + 0.55859375, + 0.06933594, + -0.61328125, + -0.16210938, + -0.375, + 0.100097656, + -0.088378906, + 0.12695312, + 0.079589844, + 0.123535156, + -1.0078125, + 0.6875, + 0.022949219, + -0.40039062, + -0.09863281, + 0.29101562, + -1.2890625, + -0.20996094, + 0.36328125, + -0.3515625, + 0.7890625, + 0.12207031, + 0.48046875, + -0.13671875, + -0.041015625, + 0.19824219, + 0.19921875, + 0.01171875, + -0.37695312, + -0.62890625, + 0.9375, + -0.671875, + 0.24609375, + 0.6484375, + -0.29101562, + 0.076171875, + 0.62109375, + -0.5546875, + 0.36523438, + 0.75390625, + -0.19140625, + -0.875, + -0.8203125, + 
-0.24414062, + -0.625, + 0.1796875, + -0.40039062, + 0.25390625, + -0.14550781, + -0.21679688, + -0.828125, + 0.3359375, + 0.43554688, + 0.55078125, + -0.44921875, + -0.28710938, + 0.24023438, + 0.18066406, + -0.6953125, + 0.020385742, + -0.11376953, + 0.13867188, + -0.92578125, + 0.33398438, + -0.328125, + 0.78125, + -0.45507812, + -0.07470703, + 0.34179688, + 0.07080078, + 0.76171875, + 0.37890625, + -0.10644531, + 0.90234375, + -0.21875, + -0.15917969, + -0.36132812, + 0.2109375, + -0.45703125, + -0.76953125, + 0.21289062, + 0.26367188, + 0.49804688, + 0.35742188, + -0.20019531, + 0.31054688, + 0.34179688, + 0.17089844, + -0.15429688, + 0.39648438, + -0.5859375, + 0.20996094, + -0.40039062, + 0.5703125, + -0.515625, + 0.5234375, + 0.049560547, + 0.328125, + 0.24804688, + 0.42578125, + 0.609375, + 0.19238281, + 0.27929688, + 0.19335938, + 0.78125, + -0.9921875, + 0.23925781, + -1.3828125, + -0.22949219, + -0.578125, + -0.13964844, + -0.17382812, + -0.011169434, + 0.26171875, + -0.73046875, + -1.4375, + 0.6953125, + -0.7421875, + 0.052246094, + 0.12207031, + 1.3046875, + 0.38867188, + 0.040283203, + -0.546875, + -0.0021514893, + 0.18457031, + -0.5546875, + -0.51171875, + -0.16308594, + -0.104003906, + -0.38867188, + -0.20996094, + -0.8984375, + 0.6015625, + -0.30078125, + -0.13769531, + 0.16113281, + 0.58203125, + -0.23730469, + -0.125, + -1.0234375, + 0.875, + -0.7109375, + 0.29101562, + 0.09667969, + -0.3203125, + -0.48046875, + 0.37890625, + 0.734375, + -0.28710938, + -0.29882812, + -0.05493164, + 0.34765625, + -0.84375, + 0.65625, + 0.578125, + -0.20019531, + 0.13769531, + 0.10058594, + -0.37109375, + 0.36523438, + -0.22167969, + 0.72265625, + ], + "inputTextTokenCount": 6, + }, + ], + "amazon.titan-text-express-v1::What is 212 degrees Fahrenheit converted to Celsius?": [ + {"Content-Type": "application/json", "x-amzn-RequestId": "03524118-8d77-430f-9e08-63b5c03a40cf"}, + 200, + { + "inputTextTokenCount": 12, + "results": [ + { + "tokenCount": 75, + 
"outputText": "\nUse the formula,\n°C = (°F - 32) x 5/9\n= 212 x 5/9\n= 100 degrees Celsius\n212 degrees Fahrenheit is 100 degrees Celsius.", + "completionReason": "FINISH", + } + ], + }, + ], + "anthropic.claude-instant-v1::Human: What is 212 degrees Fahrenheit converted to Celsius? Assistant:": [ + {"Content-Type": "application/json", "x-amzn-RequestId": "7b0b37c6-85fb-4664-8f5b-361ca7b1aa18"}, + 200, + { + "completion": " Okay, here are the conversion steps:\n212 degrees Fahrenheit\n- Subtract 32 from 212 to get 180 (to convert from Fahrenheit to Celsius scale)\n- Multiply by 5/9 (because the formula is °C = (°F - 32) × 5/9)\n- 180 × 5/9 = 100\n\nSo 212 degrees Fahrenheit converted to Celsius is 100 degrees Celsius.", + "stop_reason": "stop_sequence", + "stop": "\n\nHuman:", + }, + ], + "cohere.command-text-v14::What is 212 degrees Fahrenheit converted to Celsius?": [ + {"Content-Type": "application/json", "x-amzn-RequestId": "e77422c8-fbbf-4e17-afeb-c758425c9f97"}, + 200, + { + "generations": [ + { + "finish_reason": "MAX_TOKENS", + "id": "d20c06b0-aafe-4230-b2c7-200f4069355e", + "text": " 212°F is equivalent to 100°C. \n\nFahrenheit and Celsius are two temperature scales commonly used in everyday life. The Fahrenheit scale is based on 32°F for the freezing point of water and 212°F for the boiling point of water. On the other hand, the Celsius scale uses 0°C and 100°C as the freezing and boiling points of water, respectively. 
\n\nTo convert from Fahrenheit to Celsius, we subtract 32 from the Fahrenheit temperature and multiply the result", + } + ], + "id": "e77422c8-fbbf-4e17-afeb-c758425c9f97", + "prompt": "What is 212 degrees Fahrenheit converted to Celsius?", + }, + ], + "does-not-exist::": [ + { + "Content-Type": "application/json", + "x-amzn-RequestId": "f4908827-3db9-4742-9103-2bbc34578b03", + "x-amzn-ErrorType": "ValidationException:http://internal.amazon.com/coral/com.amazon.bedrock/", + }, + 400, + {"message": "The provided model identifier is invalid."}, + ], + "ai21.j2-mid-v1::Invalid Token": [ + { + "Content-Type": "application/json", + "x-amzn-RequestId": "9021791d-3797-493d-9277-e33aa6f6d544", + "x-amzn-ErrorType": "UnrecognizedClientException:http://internal.amazon.com/coral/com.amazon.coral.service/", + }, + 403, + {"message": "The security token included in the request is invalid."}, + ], + "amazon.titan-embed-g1-text-02::Invalid Token": [ + { + "Content-Type": "application/json", + "x-amzn-RequestId": "73328313-506e-4da8-af0f-51017fa6ca3f", + "x-amzn-ErrorType": "UnrecognizedClientException:http://internal.amazon.com/coral/com.amazon.coral.service/", + }, + 403, + {"message": "The security token included in the request is invalid."}, + ], + "amazon.titan-embed-text-v1::Invalid Token": [ + { + "Content-Type": "application/json", + "x-amzn-RequestId": "aece6ad7-e2ff-443b-a953-ba7d385fd0cc", + "x-amzn-ErrorType": "UnrecognizedClientException:http://internal.amazon.com/coral/com.amazon.coral.service/", + }, + 403, + {"message": "The security token included in the request is invalid."}, + ], + "amazon.titan-text-express-v1::Invalid Token": [ + { + "Content-Type": "application/json", + "x-amzn-RequestId": "15b39c8b-8e85-42c9-9623-06720301bda3", + "x-amzn-ErrorType": "UnrecognizedClientException:http://internal.amazon.com/coral/com.amazon.coral.service/", + }, + 403, + {"message": "The security token included in the request is invalid."}, + ], + 
"anthropic.claude-instant-v1::Human: Invalid Token Assistant:": [ + { + "Content-Type": "application/json", + "x-amzn-RequestId": "37396f55-b721-4bae-9461-4c369f5a080d", + "x-amzn-ErrorType": "UnrecognizedClientException:http://internal.amazon.com/coral/com.amazon.coral.service/", + }, + 403, + {"message": "The security token included in the request is invalid."}, + ], + "cohere.command-text-v14::Invalid Token": [ + { + "Content-Type": "application/json", + "x-amzn-RequestId": "22476490-a0d6-42db-b5ea-32d0b8a7f751", + "x-amzn-ErrorType": "UnrecognizedClientException:http://internal.amazon.com/coral/com.amazon.coral.service/", + }, + 403, + {"message": "The security token included in the request is invalid."}, + ], +} + +MODEL_PATH_RE = re.compile(r"/model/([^/]+)/invoke") + + +def simple_get(self): + content_len = int(self.headers.get("content-length")) + content = json.loads(self.rfile.read(content_len).decode("utf-8")) + + model = MODEL_PATH_RE.match(self.path).group(1) + prompt = extract_shortened_prompt(content, model) + if not prompt: + self.send_response(500) + self.end_headers() + self.wfile.write("Could not parse prompt.".encode("utf-8")) + return + + for k, v in RESPONSES.items(): + if prompt.startswith(k): + headers, status_code, response = v + break + else: # If no matches found + self.send_response(500) + self.end_headers() + self.wfile.write(("Unknown Prompt:\n%s" % prompt).encode("utf-8")) + return + + # Send response code + self.send_response(status_code) + + # Send headers + for k, v in headers.items(): + self.send_header(k, v) + self.end_headers() + + # Send response body + self.wfile.write(json.dumps(response).encode("utf-8")) + return + + +def extract_shortened_prompt(content, model): + prompt = content.get("inputText", "") or content.get("prompt", "") + prompt = "::".join((model, prompt)) # Prepend model name to prompt key to keep separate copies + return prompt.lstrip().split("\n")[0] + + +class 
MockExternalBedrockServer(MockExternalHTTPServer): + # To use this class in a test one needs to start and stop this server + # before and after making requests to the test app that makes the external + # calls. + + def __init__(self, handler=simple_get, port=None, *args, **kwargs): + super(MockExternalBedrockServer, self).__init__(handler=handler, port=port, *args, **kwargs) + + +if __name__ == "__main__": + # Use this to sort dict for easier future incremental updates + print("RESPONSES = %s" % dict(sorted(RESPONSES.items(), key=lambda i: (i[1][1], i[0])))) + + with MockExternalBedrockServer() as server: + print("MockExternalBedrockServer serving on port %s" % str(server.port)) + while True: + pass # Serve forever diff --git a/tests/external_botocore/_test_bedrock_chat_completion.py b/tests/external_botocore/_test_bedrock_chat_completion.py new file mode 100644 index 0000000000..9abdca83cf --- /dev/null +++ b/tests/external_botocore/_test_bedrock_chat_completion.py @@ -0,0 +1,317 @@ +chat_completion_payload_templates = { + "amazon.titan-text-express-v1": '{ "inputText": "%s", "textGenerationConfig": {"temperature": %f, "maxTokenCount": %d }}', + "ai21.j2-mid-v1": '{"prompt": "%s", "temperature": %f, "maxTokens": %d}', + "anthropic.claude-instant-v1": '{"prompt": "Human: %s Assistant:", "temperature": %f, "max_tokens_to_sample": %d}', + "cohere.command-text-v14": '{"prompt": "%s", "temperature": %f, "max_tokens": %d}', +} + +chat_completion_expected_events = { + "amazon.titan-text-express-v1": [ + ( + {"type": "LlmChatCompletionSummary"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (external_botocore)", + "conversation_id": "my-awesome-id", + "transaction_id": None, + "span_id": "span-id", + "trace_id": "trace-id", + "request_id": "03524118-8d77-430f-9e08-63b5c03a40cf", + "api_key_last_four_digits": "CRET", + "duration": None, # Response time varies each test run + "request.model": "amazon.titan-text-express-v1", + 
"response.model": "amazon.titan-text-express-v1", + "response.usage.completion_tokens": 75, + "response.usage.total_tokens": 87, + "response.usage.prompt_tokens": 12, + "request.temperature": 0.7, + "request.max_tokens": 100, + "response.choices.finish_reason": "FINISH", + "vendor": "bedrock", + "ingest_source": "Python", + "response.number_of_messages": 2, + }, + ), + ( + {"type": "LlmChatCompletionMessage"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (external_botocore)", + "conversation_id": "my-awesome-id", + "request_id": "03524118-8d77-430f-9e08-63b5c03a40cf", + "span_id": "span-id", + "trace_id": "trace-id", + "transaction_id": None, + "content": "What is 212 degrees Fahrenheit converted to Celsius?", + "role": "user", + "completion_id": None, + "sequence": 0, + "response.model": "amazon.titan-text-express-v1", + "vendor": "bedrock", + "ingest_source": "Python", + }, + ), + ( + {"type": "LlmChatCompletionMessage"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (external_botocore)", + "conversation_id": "my-awesome-id", + "request_id": "03524118-8d77-430f-9e08-63b5c03a40cf", + "span_id": "span-id", + "trace_id": "trace-id", + "transaction_id": None, + "content": "\nUse the formula,\n°C = (°F - 32) x 5/9\n= 212 x 5/9\n= 100 degrees Celsius\n212 degrees Fahrenheit is 100 degrees Celsius.", + "role": "assistant", + "completion_id": None, + "sequence": 1, + "response.model": "amazon.titan-text-express-v1", + "vendor": "bedrock", + "ingest_source": "Python", + }, + ), + ], + "ai21.j2-mid-v1": [ + ( + {"type": "LlmChatCompletionSummary"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (external_botocore)", + "conversation_id": "my-awesome-id", + "transaction_id": None, + "span_id": "span-id", + "trace_id": "trace-id", + "request_id": "c863d9fc-888b-421c-a175-ac5256baec62", + "response_id": "1234", + "api_key_last_four_digits": "CRET", + "duration": None, 
# Response time varies each test run + "request.model": "ai21.j2-mid-v1", + "response.model": "ai21.j2-mid-v1", + "request.temperature": 0.7, + "request.max_tokens": 100, + "response.choices.finish_reason": "endoftext", + "vendor": "bedrock", + "ingest_source": "Python", + "response.number_of_messages": 2, + }, + ), + ( + {"type": "LlmChatCompletionMessage"}, + { + "id": "1234-0", + "appName": "Python Agent Test (external_botocore)", + "conversation_id": "my-awesome-id", + "request_id": "c863d9fc-888b-421c-a175-ac5256baec62", + "span_id": "span-id", + "trace_id": "trace-id", + "transaction_id": None, + "content": "What is 212 degrees Fahrenheit converted to Celsius?", + "role": "user", + "completion_id": None, + "sequence": 0, + "response.model": "ai21.j2-mid-v1", + "vendor": "bedrock", + "ingest_source": "Python", + }, + ), + ( + {"type": "LlmChatCompletionMessage"}, + { + "id": "1234-1", + "appName": "Python Agent Test (external_botocore)", + "conversation_id": "my-awesome-id", + "request_id": "c863d9fc-888b-421c-a175-ac5256baec62", + "span_id": "span-id", + "trace_id": "trace-id", + "transaction_id": None, + "content": "\n212 degrees Fahrenheit is equal to 100 degrees Celsius.", + "role": "assistant", + "completion_id": None, + "sequence": 1, + "response.model": "ai21.j2-mid-v1", + "vendor": "bedrock", + "ingest_source": "Python", + }, + ), + ], + "anthropic.claude-instant-v1": [ + ( + {"type": "LlmChatCompletionSummary"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (external_botocore)", + "conversation_id": "my-awesome-id", + "transaction_id": None, + "span_id": "span-id", + "trace_id": "trace-id", + "request_id": "7b0b37c6-85fb-4664-8f5b-361ca7b1aa18", + "api_key_last_four_digits": "CRET", + "duration": None, # Response time varies each test run + "request.model": "anthropic.claude-instant-v1", + "response.model": "anthropic.claude-instant-v1", + "request.temperature": 0.7, + "request.max_tokens": 100, + 
"response.choices.finish_reason": "stop_sequence", + "vendor": "bedrock", + "ingest_source": "Python", + "response.number_of_messages": 2, + }, + ), + ( + {"type": "LlmChatCompletionMessage"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (external_botocore)", + "conversation_id": "my-awesome-id", + "request_id": "7b0b37c6-85fb-4664-8f5b-361ca7b1aa18", + "span_id": "span-id", + "trace_id": "trace-id", + "transaction_id": None, + "content": "Human: What is 212 degrees Fahrenheit converted to Celsius? Assistant:", + "role": "user", + "completion_id": None, + "sequence": 0, + "response.model": "anthropic.claude-instant-v1", + "vendor": "bedrock", + "ingest_source": "Python", + }, + ), + ( + {"type": "LlmChatCompletionMessage"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (external_botocore)", + "conversation_id": "my-awesome-id", + "request_id": "7b0b37c6-85fb-4664-8f5b-361ca7b1aa18", + "span_id": "span-id", + "trace_id": "trace-id", + "transaction_id": None, + "content": " Okay, here are the conversion steps:\n212 degrees Fahrenheit\n- Subtract 32 from 212 to get 180 (to convert from Fahrenheit to Celsius scale)\n- Multiply by 5/9 (because the formula is °C = (°F - 32) × 5/9)\n- 180 × 5/9 = 100\n\nSo 212 degrees Fahrenheit converted to Celsius is 100 degrees Celsius.", + "role": "assistant", + "completion_id": None, + "sequence": 1, + "response.model": "anthropic.claude-instant-v1", + "vendor": "bedrock", + "ingest_source": "Python", + }, + ), + ], + "cohere.command-text-v14": [ + ( + {"type": "LlmChatCompletionSummary"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (external_botocore)", + "conversation_id": "my-awesome-id", + "transaction_id": None, + "span_id": "span-id", + "trace_id": "trace-id", + "request_id": "e77422c8-fbbf-4e17-afeb-c758425c9f97", + "response_id": None, # UUID that varies with each run + "api_key_last_four_digits": "CRET", + 
"duration": None, # Response time varies each test run + "request.model": "cohere.command-text-v14", + "response.model": "cohere.command-text-v14", + "request.temperature": 0.7, + "request.max_tokens": 100, + "response.choices.finish_reason": "MAX_TOKENS", + "vendor": "bedrock", + "ingest_source": "Python", + "response.number_of_messages": 2, + }, + ), + ( + {"type": "LlmChatCompletionMessage"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (external_botocore)", + "conversation_id": "my-awesome-id", + "request_id": "e77422c8-fbbf-4e17-afeb-c758425c9f97", + "span_id": "span-id", + "trace_id": "trace-id", + "transaction_id": None, + "content": "What is 212 degrees Fahrenheit converted to Celsius?", + "role": "user", + "completion_id": None, + "sequence": 0, + "response.model": "cohere.command-text-v14", + "vendor": "bedrock", + "ingest_source": "Python", + }, + ), + ( + {"type": "LlmChatCompletionMessage"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (external_botocore)", + "conversation_id": "my-awesome-id", + "request_id": "e77422c8-fbbf-4e17-afeb-c758425c9f97", + "span_id": "span-id", + "trace_id": "trace-id", + "transaction_id": None, + "content": " 212°F is equivalent to 100°C. \n\nFahrenheit and Celsius are two temperature scales commonly used in everyday life. The Fahrenheit scale is based on 32°F for the freezing point of water and 212°F for the boiling point of water. On the other hand, the Celsius scale uses 0°C and 100°C as the freezing and boiling points of water, respectively. 
\n\nTo convert from Fahrenheit to Celsius, we subtract 32 from the Fahrenheit temperature and multiply the result", + "role": "assistant", + "completion_id": None, + "sequence": 1, + "response.model": "cohere.command-text-v14", + "vendor": "bedrock", + "ingest_source": "Python", + }, + ), + ], +} + +chat_completion_expected_client_errors = { + "amazon.titan-text-express-v1": { + "conversation_id": "my-awesome-id", + "request_id": "15b39c8b-8e85-42c9-9623-06720301bda3", + "api_key_last_four_digits": "-KEY", + "request.model": "amazon.titan-text-express-v1", + "request.temperature": 0.7, + "request.max_tokens": 100, + "vendor": "Bedrock", + "ingest_source": "Python", + "http.statusCode": 403, + "error.message": "The security token included in the request is invalid.", + "error.code": "UnrecognizedClientException", + }, + "ai21.j2-mid-v1": { + "conversation_id": "my-awesome-id", + "request_id": "9021791d-3797-493d-9277-e33aa6f6d544", + "api_key_last_four_digits": "-KEY", + "request.model": "ai21.j2-mid-v1", + "request.temperature": 0.7, + "request.max_tokens": 100, + "vendor": "Bedrock", + "ingest_source": "Python", + "http.statusCode": 403, + "error.message": "The security token included in the request is invalid.", + "error.code": "UnrecognizedClientException", + }, + "anthropic.claude-instant-v1": { + "conversation_id": "my-awesome-id", + "request_id": "37396f55-b721-4bae-9461-4c369f5a080d", + "api_key_last_four_digits": "-KEY", + "request.model": "anthropic.claude-instant-v1", + "request.temperature": 0.7, + "request.max_tokens": 100, + "vendor": "Bedrock", + "ingest_source": "Python", + "http.statusCode": 403, + "error.message": "The security token included in the request is invalid.", + "error.code": "UnrecognizedClientException", + }, + "cohere.command-text-v14": { + "conversation_id": "my-awesome-id", + "request_id": "22476490-a0d6-42db-b5ea-32d0b8a7f751", + "api_key_last_four_digits": "-KEY", + "request.model": "cohere.command-text-v14", + 
"request.temperature": 0.7, + "request.max_tokens": 100, + "vendor": "Bedrock", + "ingest_source": "Python", + "http.statusCode": 403, + "error.message": "The security token included in the request is invalid.", + "error.code": "UnrecognizedClientException", + }, +} diff --git a/tests/external_botocore/_test_bedrock_embeddings.py b/tests/external_botocore/_test_bedrock_embeddings.py new file mode 100644 index 0000000000..8fb2ceecee --- /dev/null +++ b/tests/external_botocore/_test_bedrock_embeddings.py @@ -0,0 +1,74 @@ +embedding_payload_templates = { + "amazon.titan-embed-text-v1": '{ "inputText": "%s" }', + "amazon.titan-embed-g1-text-02": '{ "inputText": "%s" }', +} + +embedding_expected_events = { + "amazon.titan-embed-text-v1": [ + ( + {"type": "LlmEmbedding"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (external_botocore)", + "transaction_id": None, + "span_id": "span-id", + "trace_id": "trace-id", + "input": "This is an embedding test.", + "api_key_last_four_digits": "CRET", + "duration": None, # Response time varies each test run + "response.model": "amazon.titan-embed-text-v1", + "request.model": "amazon.titan-embed-text-v1", + "request_id": "11233989-07e8-4ecb-9ba6-79601ba6d8cc", + "response.usage.total_tokens": 6, + "response.usage.prompt_tokens": 6, + "vendor": "bedrock", + "ingest_source": "Python", + }, + ), + ], + "amazon.titan-embed-g1-text-02": [ + ( + {"type": "LlmEmbedding"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (external_botocore)", + "transaction_id": None, + "span_id": "span-id", + "trace_id": "trace-id", + "input": "This is an embedding test.", + "api_key_last_four_digits": "CRET", + "duration": None, # Response time varies each test run + "response.model": "amazon.titan-embed-g1-text-02", + "request.model": "amazon.titan-embed-g1-text-02", + "request_id": "b10ac895-eae3-4f07-b926-10b2866c55ed", + "response.usage.total_tokens": 6, + 
"response.usage.prompt_tokens": 6, + "vendor": "bedrock", + "ingest_source": "Python", + }, + ), + ] +} + +embedding_expected_client_errors = { + "amazon.titan-embed-text-v1": { + "request_id": "aece6ad7-e2ff-443b-a953-ba7d385fd0cc", + "api_key_last_four_digits": "-KEY", + "request.model": "amazon.titan-embed-text-v1", + "vendor": "Bedrock", + "ingest_source": "Python", + "http.statusCode": 403, + "error.message": "The security token included in the request is invalid.", + "error.code": "UnrecognizedClientException", + }, + "amazon.titan-embed-g1-text-02": { + "request_id": "73328313-506e-4da8-af0f-51017fa6ca3f", + "api_key_last_four_digits": "-KEY", + "request.model": "amazon.titan-embed-g1-text-02", + "vendor": "Bedrock", + "ingest_source": "Python", + "http.statusCode": 403, + "error.message": "The security token included in the request is invalid.", + "error.code": "UnrecognizedClientException", + }, +} diff --git a/tests/external_botocore/conftest.py b/tests/external_botocore/conftest.py index e5cf155336..6dbf20ef42 100644 --- a/tests/external_botocore/conftest.py +++ b/tests/external_botocore/conftest.py @@ -12,19 +12,152 @@ # See the License for the specific language governing permissions and # limitations under the License. 
+import json
+import os
+import re
+
 import pytest
+from _mock_external_bedrock_server import (
+    MockExternalBedrockServer,
+    extract_shortened_prompt,
+)
+from testing_support.fixtures import (  # noqa: F401, pylint: disable=W0611
+    collector_agent_registration_fixture,
+    collector_available_fixture,
+)
-from testing_support.fixtures import collector_agent_registration_fixture, collector_available_fixture  # noqa: F401; pylint: disable=W0611
+from newrelic.api.time_trace import current_trace
+from newrelic.api.transaction import current_transaction
+from newrelic.common.object_wrapper import wrap_function_wrapper
+from newrelic.common.package_version_utils import (
+    get_package_version,
+    get_package_version_tuple,
+)
+
+BOTOCORE_VERSION = get_package_version("botocore")
 
 _default_settings = {
-    'transaction_tracer.explain_threshold': 0.0,
-    'transaction_tracer.transaction_threshold': 0.0,
-    'transaction_tracer.stack_trace_threshold': 0.0,
-    'debug.log_data_collector_payloads': True,
-    'debug.record_transaction_failure': True,
+    "transaction_tracer.explain_threshold": 0.0,
+    "transaction_tracer.transaction_threshold": 0.0,
+    "transaction_tracer.stack_trace_threshold": 0.0,
+    "debug.log_data_collector_payloads": True,
+    "debug.record_transaction_failure": True,
+    "ml_insights_events.enabled": True,
 }
 
 collector_agent_registration = collector_agent_registration_fixture(
-    app_name='Python Agent Test (external_botocore)',
-    default_settings=_default_settings)
+    app_name="Python Agent Test (external_botocore)",
+    default_settings=_default_settings,
+    linked_applications=["Python Agent Test (external_botocore)"],
+)
+
+
+# Bedrock Fixtures
+
+BEDROCK_AUDIT_LOG_FILE = os.path.join(os.path.realpath(os.path.dirname(__file__)), "bedrock_audit.log")
+BEDROCK_AUDIT_LOG_CONTENTS = {}
+
+
+@pytest.fixture(scope="session")
+def bedrock_server():
+    """
+    This fixture will either create a mocked backend for testing purposes, or will
+    set up an audit log file to record responses of the real Bedrock backend.
+    The behavior can be controlled by setting NEW_RELIC_TESTING_RECORD_BEDROCK_RESPONSES=1 as
+    an environment variable to run using the real Bedrock backend. (Default: mocking)
+    """
+    import boto3
+
+    from newrelic.core.config import _environ_as_bool
+
+    if get_package_version_tuple("botocore") < (1, 31, 57):
+        pytest.skip(reason="Bedrock Runtime not available.")
+
+    if not _environ_as_bool("NEW_RELIC_TESTING_RECORD_BEDROCK_RESPONSES", False):
+        # Use mocked Bedrock backend and prerecorded responses
+        with MockExternalBedrockServer() as server:
+            client = boto3.client(  # nosec
+                "bedrock-runtime",
+                "us-east-1",
+                endpoint_url="http://localhost:%d" % server.port,
+                aws_access_key_id="NOT-A-REAL-SECRET",
+                aws_secret_access_key="NOT-A-REAL-SECRET",
+            )
+
+            yield client
+    else:
+        # Use real Bedrock backend and record responses
+        assert (
+            os.environ["AWS_ACCESS_KEY_ID"] and os.environ["AWS_SECRET_ACCESS_KEY"]
+        ), "AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are required."
+
+        # Construct real client
+        client = boto3.client(
+            "bedrock-runtime",
+            "us-east-1",
+        )
+
+        # Apply function wrappers to record data
+        wrap_function_wrapper(
+            "botocore.endpoint", "Endpoint._do_get_response", wrap_botocore_endpoint_Endpoint__do_get_response
+        )
+        yield client  # Run tests
+
+        # Write responses to audit log
+        bedrock_audit_log_contents = dict(sorted(BEDROCK_AUDIT_LOG_CONTENTS.items(), key=lambda i: (i[1][1], i[0])))
+        with open(BEDROCK_AUDIT_LOG_FILE, "w") as audit_log_fp:
+            json.dump(bedrock_audit_log_contents, fp=audit_log_fp, indent=4)
+
+
+# Intercept outgoing requests and log to file for mocking
+RECORDED_HEADERS = set(["x-amzn-requestid", "x-amzn-errortype", "content-type"])
+
+
+def wrap_botocore_endpoint_Endpoint__do_get_response(wrapped, instance, args, kwargs):
+    request = bind__do_get_response(*args, **kwargs)
+    if not request:
+        return wrapped(*args, **kwargs)
+
+    body = json.loads(request.body)
+
+    match = re.search(r"/model/([0-9a-zA-Z.-]+)/", request.url)
+    model = match.group(1)
+    prompt = extract_shortened_prompt(body, model)
+
+    # Send request
+    result = wrapped(*args, **kwargs)
+
+    # Unpack response
+    success, exception = result
+    response = (success or exception)[0]
+
+    # Clean up data
+    data = json.loads(response.content.decode("utf-8"))
+    headers = dict(response.headers.items())
+    headers = dict(
+        filter(
+            lambda k: k[0].lower() in RECORDED_HEADERS or k[0].startswith("x-ratelimit"),
+            headers.items(),
+        )
+    )
+    status_code = response.status_code
+
+    # Log response
+    BEDROCK_AUDIT_LOG_CONTENTS[prompt] = headers, status_code, data  # Append response data to audit log
+    return result
+
+
+def bind__do_get_response(request, operation_model, context):
+    return request
+
+
+@pytest.fixture(scope="session")
+def set_trace_info():
+    def _set_trace_info():
+        txn = current_transaction()
+        if txn:
+            txn._trace_id = "trace-id"
+        trace = current_trace()
+        if trace:
+            trace.guid = "span-id"
+
+    return _set_trace_info
diff --git a/tests/external_botocore/test_bedrock_chat_completion.py b/tests/external_botocore/test_bedrock_chat_completion.py
new file mode 100644
index 0000000000..4f32a92ac6
--- /dev/null
+++ b/tests/external_botocore/test_bedrock_chat_completion.py
@@ -0,0 +1,233 @@
+# Copyright 2010 New Relic, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import copy
+import json
+from io import BytesIO
+
+import botocore.exceptions
+import pytest
+from _test_bedrock_chat_completion import (
+    chat_completion_expected_client_errors,
+    chat_completion_expected_events,
+    chat_completion_payload_templates,
+)
+from conftest import BOTOCORE_VERSION
+from testing_support.fixtures import (
+    dt_enabled,
+    override_application_settings,
+    reset_core_stats_engine,
+)
+from testing_support.validators.validate_error_trace_attributes import (
+    validate_error_trace_attributes,
+)
+from testing_support.validators.validate_ml_event_count import validate_ml_event_count
+from testing_support.validators.validate_ml_events import validate_ml_events
+from testing_support.validators.validate_transaction_metrics import (
+    validate_transaction_metrics,
+)
+
+from newrelic.api.background_task import background_task
+from newrelic.api.transaction import add_custom_attribute
+from newrelic.common.object_names import callable_name
+
+
+@pytest.fixture(scope="session", params=[False, True], ids=["Bytes", "Stream"])
+def is_file_payload(request):
+    return request.param
+
+
+@pytest.fixture(
scope="module", + params=[ + "amazon.titan-text-express-v1", + "ai21.j2-mid-v1", + "anthropic.claude-instant-v1", + "cohere.command-text-v14", + ], +) +def model_id(request): + return request.param + + +@pytest.fixture(scope="module") +def exercise_model(bedrock_server, model_id, is_file_payload): + payload_template = chat_completion_payload_templates[model_id] + + def _exercise_model(prompt, temperature=0.7, max_tokens=100): + body = (payload_template % (prompt, temperature, max_tokens)).encode("utf-8") + if is_file_payload: + body = BytesIO(body) + + response = bedrock_server.invoke_model( + body=body, + modelId=model_id, + accept="application/json", + contentType="application/json", + ) + response_body = json.loads(response.get("body").read()) + assert response_body + + return _exercise_model + + +@pytest.fixture(scope="module") +def expected_events(model_id): + return chat_completion_expected_events[model_id] + + +@pytest.fixture(scope="module") +def expected_events_no_convo_id(model_id): + events = copy.deepcopy(chat_completion_expected_events[model_id]) + for event in events: + event[1]["conversation_id"] = "" + return events + + +@pytest.fixture(scope="module") +def expected_client_error(model_id): + return chat_completion_expected_client_errors[model_id] + + +_test_bedrock_chat_completion_prompt = "What is 212 degrees Fahrenheit converted to Celsius?" 
+
+
+# not working with claude
+@reset_core_stats_engine()
+def test_bedrock_chat_completion_in_txn_with_convo_id(set_trace_info, exercise_model, expected_events):
+    @validate_ml_events(expected_events)
+    # One summary event, one user message, and one response message from the assistant
+    @validate_ml_event_count(count=3)
+    @validate_transaction_metrics(
+        name="test_bedrock_chat_completion_in_txn_with_convo_id",
+        custom_metrics=[
+            ("Python/ML/Bedrock/%s" % BOTOCORE_VERSION, 1),
+        ],
+        background_task=True,
+    )
+    @background_task(name="test_bedrock_chat_completion_in_txn_with_convo_id")
+    def _test():
+        set_trace_info()
+        add_custom_attribute("conversation_id", "my-awesome-id")
+        exercise_model(prompt=_test_bedrock_chat_completion_prompt, temperature=0.7, max_tokens=100)
+
+    _test()
+
+
+# not working with claude
+@reset_core_stats_engine()
+def test_bedrock_chat_completion_in_txn_no_convo_id(set_trace_info, exercise_model, expected_events_no_convo_id):
+    @validate_ml_events(expected_events_no_convo_id)
+    # One summary event, one user message, and one response message from the assistant
+    @validate_ml_event_count(count=3)
+    @validate_transaction_metrics(
+        name="test_bedrock_chat_completion_in_txn_no_convo_id",
+        custom_metrics=[
+            ("Python/ML/Bedrock/%s" % BOTOCORE_VERSION, 1),
+        ],
+        background_task=True,
+    )
+    @background_task(name="test_bedrock_chat_completion_in_txn_no_convo_id")
+    def _test():
+        set_trace_info()
+        exercise_model(prompt=_test_bedrock_chat_completion_prompt, temperature=0.7, max_tokens=100)
+
+    _test()
+
+
+@reset_core_stats_engine()
+@validate_ml_event_count(count=0)
+def test_bedrock_chat_completion_outside_txn(set_trace_info, exercise_model):
+    set_trace_info()
+    add_custom_attribute("conversation_id", "my-awesome-id")
+    exercise_model(prompt=_test_bedrock_chat_completion_prompt, temperature=0.7, max_tokens=100)
+
+
+disabled_ml_settings = {"machine_learning.enabled": False, "ml_insights_events.enabled": False}
+
+
+@override_application_settings(disabled_ml_settings)
+@reset_core_stats_engine()
+@validate_ml_event_count(count=0)
+@validate_transaction_metrics(
+    name="test_bedrock_chat_completion_disabled_settings",
+    custom_metrics=[
+        ("Python/ML/Bedrock/%s" % BOTOCORE_VERSION, 1),
+    ],
+    background_task=True,
+)
+@background_task(name="test_bedrock_chat_completion_disabled_settings")
+def test_bedrock_chat_completion_disabled_settings(set_trace_info, exercise_model):
+    set_trace_info()
+    exercise_model(prompt=_test_bedrock_chat_completion_prompt, temperature=0.7, max_tokens=100)
+
+
+_client_error = botocore.exceptions.ClientError
+_client_error_name = callable_name(_client_error)
+
+
+@validate_error_trace_attributes(
+    "botocore.errorfactory:ValidationException",
+    exact_attrs={
+        "agent": {},
+        "intrinsic": {},
+        "user": {
+            "conversation_id": "my-awesome-id",
+            "request_id": "f4908827-3db9-4742-9103-2bbc34578b03",
+            "api_key_last_four_digits": "CRET",
+            "request.model": "does-not-exist",
+            "vendor": "Bedrock",
+            "ingest_source": "Python",
+            "http.statusCode": 400,
+            "error.message": "The provided model identifier is invalid.",
+            "error.code": "ValidationException",
+        },
+    },
+)
+@background_task()
+def test_bedrock_chat_completion_error_invalid_model(bedrock_server, set_trace_info):
+    set_trace_info()
+    add_custom_attribute("conversation_id", "my-awesome-id")
+    with pytest.raises(_client_error):
+        bedrock_server.invoke_model(
+            body=b"{}",
+            modelId="does-not-exist",
+            accept="application/json",
+            contentType="application/json",
+        )
+
+
+@dt_enabled
+@reset_core_stats_engine()
+def test_bedrock_chat_completion_error_incorrect_access_key(
+    monkeypatch, bedrock_server, exercise_model, set_trace_info, expected_client_error
+):
+    @validate_error_trace_attributes(
+        _client_error_name,
+        exact_attrs={
+            "agent": {},
+            "intrinsic": {},
+            "user": expected_client_error,
+        },
+    )
+    @background_task()
+    def _test():
+        monkeypatch.setattr(bedrock_server._request_signer._credentials, "access_key", "INVALID-ACCESS-KEY")
+
+        with pytest.raises(_client_error):  # not sure where this exception actually comes from
+            set_trace_info()
+            add_custom_attribute("conversation_id", "my-awesome-id")
+            exercise_model(prompt="Invalid Token", temperature=0.7, max_tokens=100)
+
+    _test()
diff --git a/tests/external_botocore/test_bedrock_embeddings.py b/tests/external_botocore/test_bedrock_embeddings.py
new file mode 100644
index 0000000000..db985ee467
--- /dev/null
+++ b/tests/external_botocore/test_bedrock_embeddings.py
@@ -0,0 +1,159 @@
+# Copyright 2010 New Relic, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import json
+from io import BytesIO
+
+import botocore.exceptions
+import pytest
+from _test_bedrock_embeddings import (
+    embedding_expected_client_errors,
+    embedding_expected_events,
+    embedding_payload_templates,
+)
+from conftest import BOTOCORE_VERSION
+from testing_support.fixtures import (
+    dt_enabled,
+    override_application_settings,
+    reset_core_stats_engine,
+)
+from testing_support.validators.validate_error_trace_attributes import (
+    validate_error_trace_attributes,
+)
+from testing_support.validators.validate_ml_event_count import validate_ml_event_count
+from testing_support.validators.validate_ml_events import validate_ml_events
+from testing_support.validators.validate_transaction_metrics import (
+    validate_transaction_metrics,
+)
+
+from newrelic.api.background_task import background_task
+from newrelic.common.object_names import callable_name
+
+disabled_ml_insights_settings = {"ml_insights_events.enabled": False}
+
+
+@pytest.fixture(scope="session", params=[False, True], ids=["Bytes", "Stream"])
+def is_file_payload(request):
+    return request.param
+
+
+@pytest.fixture(
+    scope="module",
+    params=[
+        "amazon.titan-embed-text-v1",
+        "amazon.titan-embed-g1-text-02",
+    ],
+)
+def model_id(request):
+    return request.param
+
+
+@pytest.fixture(scope="module")
+def exercise_model(bedrock_server, model_id, is_file_payload):
+    payload_template = embedding_payload_templates[model_id]
+
+    def _exercise_model(prompt, temperature=0.7, max_tokens=100):
+        body = (payload_template % prompt).encode("utf-8")
+        if is_file_payload:
+            body = BytesIO(body)
+
+        response = bedrock_server.invoke_model(
+            body=body,
+            modelId=model_id,
+            accept="application/json",
+            contentType="application/json",
+        )
+        response_body = json.loads(response.get("body").read())
+        assert response_body
+
+    return _exercise_model
+
+
+@pytest.fixture(scope="module")
+def expected_events(model_id):
+    return embedding_expected_events[model_id]
+
+
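As an editor's aside on the recording path in `conftest.py` earlier in this patch: the botocore wrapper pulls the model id out of the invoke URL with a regex and keeps only an allow-list of response headers (plus any `x-ratelimit-*` ones). A minimal standalone sketch of those two steps, with an illustrative URL and header set:

```python
import re

# Pattern copied from the conftest wrapper; the URL itself is illustrative.
url = "https://bedrock-runtime.us-east-1.amazonaws.com/model/amazon.titan-embed-text-v1/invoke"
model = re.search(r"/model/([0-9a-zA-Z.-]+)/", url).group(1)
print(model)  # amazon.titan-embed-text-v1

# Header filtering: keep allow-listed headers plus any x-ratelimit-* ones.
RECORDED_HEADERS = {"x-amzn-requestid", "x-amzn-errortype", "content-type"}
headers = {
    "Content-Type": "application/json",
    "x-amzn-RequestId": "11233989-07e8-4ecb-9ba6-79601ba6d8cc",
    "x-ratelimit-limit-requests": "200",
    "Server": "amazon",  # not allow-listed, so dropped
}
kept = {k: v for k, v in headers.items() if k.lower() in RECORDED_HEADERS or k.lower().startswith("x-ratelimit")}
print(sorted(kept))  # ['Content-Type', 'x-amzn-RequestId', 'x-ratelimit-limit-requests']
```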
+@pytest.fixture(scope="module")
+def expected_client_error(model_id):
+    return embedding_expected_client_errors[model_id]
+
+
+@reset_core_stats_engine()
+def test_bedrock_embedding(set_trace_info, exercise_model, expected_events):
+    @validate_ml_events(expected_events)
+    @validate_ml_event_count(count=1)
+    @validate_transaction_metrics(
+        name="test_bedrock_embedding",
+        custom_metrics=[
+            ("Python/ML/Bedrock/%s" % BOTOCORE_VERSION, 1),
+        ],
+        background_task=True,
+    )
+    @background_task(name="test_bedrock_embedding")
+    def _test():
+        set_trace_info()
+        exercise_model(prompt="This is an embedding test.")
+
+    _test()
+
+
+@reset_core_stats_engine()
+@validate_ml_event_count(count=0)
+def test_bedrock_embedding_outside_txn(exercise_model):
+    exercise_model(prompt="This is an embedding test.")
+
+
+_client_error = botocore.exceptions.ClientError
+_client_error_name = callable_name(_client_error)
+
+
+@override_application_settings(disabled_ml_insights_settings)
+@reset_core_stats_engine()
+@validate_ml_event_count(count=0)
+@validate_transaction_metrics(
+    name="test_bedrock_embeddings:test_bedrock_embedding_disabled_settings",
+    custom_metrics=[
+        ("Python/ML/Bedrock/%s" % BOTOCORE_VERSION, 1),
+    ],
+    background_task=True,
+)
+@background_task()
+def test_bedrock_embedding_disabled_settings(set_trace_info, exercise_model):
+    set_trace_info()
+    exercise_model(prompt="This is an embedding test.")
+
+
+@dt_enabled
+@reset_core_stats_engine()
+def test_bedrock_embedding_error_incorrect_access_key(
+    monkeypatch, bedrock_server, exercise_model, set_trace_info, expected_client_error
+):
+    @validate_error_trace_attributes(
+        _client_error_name,
+        exact_attrs={
+            "agent": {},
+            "intrinsic": {},
+            "user": expected_client_error,
+        },
+    )
+    @background_task()
+    def _test():
+        monkeypatch.setattr(bedrock_server._request_signer._credentials, "access_key", "INVALID-ACCESS-KEY")
+
+        with pytest.raises(_client_error):  # not sure where this exception actually comes from
+            set_trace_info()
+            exercise_model(prompt="Invalid Token", temperature=0.7, max_tokens=100)
+
+    _test()
diff --git a/tests/external_boto3/test_boto3_iam.py b/tests/external_botocore/test_boto3_iam.py
similarity index 95%
rename from tests/external_boto3/test_boto3_iam.py
rename to tests/external_botocore/test_boto3_iam.py
index a2237dc936..3d672f3751 100644
--- a/tests/external_boto3/test_boto3_iam.py
+++ b/tests/external_botocore/test_boto3_iam.py
@@ -17,7 +17,7 @@
 
 import boto3
 import moto
-from testing_support.fixtures import override_application_settings
+from testing_support.fixtures import dt_enabled
 from testing_support.validators.validate_span_events import validate_span_events
 from testing_support.validators.validate_transaction_metrics import (
     validate_transaction_metrics,
@@ -53,7 +53,7 @@
 ]
 
 
-@override_application_settings({"distributed_tracing.enabled": True})
+@dt_enabled
 @validate_span_events(exact_agents={"http.url": "https://iam.amazonaws.com/"}, count=3)
 @validate_span_events(expected_agents=("aws.requestId",), count=3)
 @validate_span_events(exact_agents={"aws.operation": "CreateUser"}, count=1)
diff --git a/tests/external_boto3/test_boto3_s3.py b/tests/external_botocore/test_boto3_s3.py
similarity index 97%
rename from tests/external_boto3/test_boto3_s3.py
rename to tests/external_botocore/test_boto3_s3.py
index a7ecf034ab..b6299d9f6e 100644
--- a/tests/external_boto3/test_boto3_s3.py
+++ b/tests/external_botocore/test_boto3_s3.py
@@ -18,7 +18,7 @@
 import boto3
 import botocore
 import moto
-from testing_support.fixtures import override_application_settings
+from testing_support.fixtures import dt_enabled
 from testing_support.validators.validate_span_events import validate_span_events
 from testing_support.validators.validate_transaction_metrics import (
     validate_transaction_metrics,
@@ -73,7 +73,7 @@
 ]
 
 
-@override_application_settings({"distributed_tracing.enabled": True})
+@dt_enabled
 @validate_span_events(exact_agents={"aws.operation": "CreateBucket"}, count=1)
 @validate_span_events(exact_agents={"aws.operation": "PutObject"}, count=1)
 @validate_span_events(exact_agents={"aws.operation": "ListObjects"}, count=1)
diff --git a/tests/external_boto3/test_boto3_sns.py b/tests/external_botocore/test_boto3_sns.py
similarity index 94%
rename from tests/external_boto3/test_boto3_sns.py
rename to tests/external_botocore/test_boto3_sns.py
index bafe68611d..5e6c7c4b4e 100644
--- a/tests/external_boto3/test_boto3_sns.py
+++ b/tests/external_botocore/test_boto3_sns.py
@@ -17,7 +17,7 @@
 import boto3
 import moto
 import pytest
-from testing_support.fixtures import override_application_settings
+from testing_support.fixtures import dt_enabled
 from testing_support.validators.validate_span_events import validate_span_events
 from testing_support.validators.validate_transaction_metrics import (
     validate_transaction_metrics,
@@ -45,7 +45,7 @@
 sns_metrics_phone = [("MessageBroker/SNS/Topic" "/Produce/Named/PhoneNumber", 1)]
 
 
-@override_application_settings({"distributed_tracing.enabled": True})
+@dt_enabled
 @validate_span_events(expected_agents=("aws.requestId",), count=2)
 @validate_span_events(exact_agents={"aws.operation": "CreateTopic"}, count=1)
 @validate_span_events(exact_agents={"aws.operation": "Publish"}, count=1)
@@ -74,7 +74,7 @@ def test_publish_to_sns_topic(topic_argument):
     assert "MessageId" in published_message
 
 
-@override_application_settings({"distributed_tracing.enabled": True})
+@dt_enabled
 @validate_span_events(expected_agents=("aws.requestId",), count=3)
 @validate_span_events(exact_agents={"aws.operation": "CreateTopic"}, count=1)
 @validate_span_events(exact_agents={"aws.operation": "Subscribe"}, count=1)
diff --git a/tests/external_botocore/test_botocore_dynamodb.py b/tests/external_botocore/test_botocore_dynamodb.py
index 30114d53b1..932fb1743a 100644
--- a/tests/external_botocore/test_botocore_dynamodb.py
+++ b/tests/external_botocore/test_botocore_dynamodb.py
@@ -17,7 +17,7 @@
 
 import botocore.session
 import moto
-from testing_support.fixtures import override_application_settings
+from testing_support.fixtures import dt_enabled
 from testing_support.validators.validate_span_events import validate_span_events
 from testing_support.validators.validate_transaction_metrics import (
     validate_transaction_metrics,
@@ -63,7 +63,7 @@
 ]
 
 
-@override_application_settings({"distributed_tracing.enabled": True})
+@dt_enabled
 @validate_span_events(expected_agents=("aws.requestId",), count=8)
 @validate_span_events(exact_agents={"aws.operation": "PutItem"}, count=1)
 @validate_span_events(exact_agents={"aws.operation": "GetItem"}, count=1)
@@ -80,7 +80,7 @@
     background_task=True,
 )
 @background_task()
-@moto.mock_dynamodb2
+@moto.mock_dynamodb
 def test_dynamodb():
     session = botocore.session.get_session()
     client = session.create_client(
diff --git a/tests/external_botocore/test_botocore_ec2.py b/tests/external_botocore/test_botocore_ec2.py
index 28a8ff63ae..3cb83e3185 100644
--- a/tests/external_botocore/test_botocore_ec2.py
+++ b/tests/external_botocore/test_botocore_ec2.py
@@ -17,7 +17,7 @@
 
 import botocore.session
 import moto
-from testing_support.fixtures import override_application_settings
+from testing_support.fixtures import dt_enabled
 from testing_support.validators.validate_span_events import validate_span_events
 from testing_support.validators.validate_transaction_metrics import (
     validate_transaction_metrics,
@@ -55,7 +55,7 @@
 ]
 
 
-@override_application_settings({"distributed_tracing.enabled": True})
+@dt_enabled
 @validate_span_events(expected_agents=("aws.requestId",), count=3)
 @validate_span_events(exact_agents={"aws.operation": "RunInstances"}, count=1)
 @validate_span_events(exact_agents={"aws.operation": "DescribeInstances"}, count=1)
diff --git a/tests/external_botocore/test_botocore_s3.py b/tests/external_botocore/test_botocore_s3.py
index 1984d8103e..ea0c225390 100644
--- a/tests/external_botocore/test_botocore_s3.py
+++ b/tests/external_botocore/test_botocore_s3.py
@@ -18,7 +18,7 @@
 import botocore
 import botocore.session
 import moto
-from testing_support.fixtures import override_application_settings
+from testing_support.fixtures import dt_enabled
 from testing_support.validators.validate_span_events import validate_span_events
 from testing_support.validators.validate_transaction_metrics import (
     validate_transaction_metrics,
@@ -67,7 +67,7 @@
 ]
 
 
-@override_application_settings({"distributed_tracing.enabled": True})
+@dt_enabled
 @validate_span_events(exact_agents={"aws.operation": "CreateBucket"}, count=1)
 @validate_span_events(exact_agents={"aws.operation": "PutObject"}, count=1)
 @validate_span_events(exact_agents={"aws.operation": "ListObjects"}, count=1)
diff --git a/tests/external_botocore/test_botocore_sqs.py b/tests/external_botocore/test_botocore_sqs.py
index 3f7d8c0220..63f15801b5 100644
--- a/tests/external_botocore/test_botocore_sqs.py
+++ b/tests/external_botocore/test_botocore_sqs.py
@@ -18,7 +18,7 @@
 import botocore.session
 import moto
 import pytest
-from testing_support.fixtures import override_application_settings
+from testing_support.fixtures import dt_enabled
 from testing_support.validators.validate_span_events import validate_span_events
 from testing_support.validators.validate_transaction_metrics import (
     validate_transaction_metrics,
@@ -70,7 +70,7 @@
 ]
 
 
-@override_application_settings({"distributed_tracing.enabled": True})
+@dt_enabled
 @validate_span_events(exact_agents={"aws.operation": "CreateQueue"}, count=1)
 @validate_span_events(exact_agents={"aws.operation": "SendMessage"}, count=1)
 @validate_span_events(exact_agents={"aws.operation": "ReceiveMessage"}, count=1)
@@ -124,7 +124,7 @@ def test_sqs():
     assert resp["ResponseMetadata"]["HTTPStatusCode"] == 200
 
 
-@override_application_settings({"distributed_tracing.enabled": True})
+@dt_enabled
 @validate_transaction_metrics(
     "test_botocore_sqs:test_sqs_malformed",
     scoped_metrics=_sqs_scoped_metrics_malformed,
diff --git a/tests/mlmodel_openai/_mock_external_openai_server.py b/tests/mlmodel_openai/_mock_external_openai_server.py
new file mode 100644
index 0000000000..44cfb5d0de
--- /dev/null
+++ b/tests/mlmodel_openai/_mock_external_openai_server.py
@@ -0,0 +1,226 @@
+# Copyright 2010 New Relic, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import json
+
+from testing_support.mock_external_http_server import MockExternalHTTPServer
+
+# This defines an external server test apps can make requests to instead of
+# the real OpenAI backend. This provides 3 features:
+#
+# 1) This removes dependencies on external websites.
+# 2) Provides a better mechanism for making an external call in a test app than
+#    simply calling another endpoint the test app makes available, because this
+#    server will not be instrumented, meaning we don't have to sort through
+#    transactions to separate the ones created in the test app from the ones
+#    created by an external call.
+# 3) This app runs on a separate thread, meaning it won't block the test app.
+
+RESPONSES = {
+    "Invalid API key.": (
+        {"Content-Type": "application/json; charset=utf-8", "x-request-id": "4f8f61a7d0401e42a6760ea2ca2049f6"},
+        401,
+        {
+            "error": {
+                "message": "Incorrect API key provided: invalid. You can find your API key at https://platform.openai.com/account/api-keys.",
+                "type": "invalid_request_error",
+                "param": "null",
+                "code": "invalid_api_key",
+            }
+        },
+    ),
+    "Embedded: Invalid API key.": (
+        {"Content-Type": "application/json; charset=utf-8", "x-request-id": "4f8f61a7d0401e42a6760ea2ca2049f6"},
+        401,
+        {
+            "error": {
+                "message": "Incorrect API key provided: DEADBEEF. You can find your API key at https://platform.openai.com/account/api-keys.",
+                "type": "invalid_request_error",
+                "param": "null",
+                "code": "invalid_api_key",
+            }
+        },
+    ),
+    "Model does not exist.": (
+        {
+            "Content-Type": "application/json",
+            "x-request-id": "cfdf51fb795362ae578c12a21796262c",
+        },
+        404,
+        {
+            "error": {
+                "message": "The model `does-not-exist` does not exist",
+                "type": "invalid_request_error",
+                "param": "null",
+                "code": "model_not_found",
+            }
+        },
+    ),
+    "This is an embedding test.": (
+        {
+            "Content-Type": "application/json",
+            "openai-organization": "new-relic-nkmd8b",
+            "openai-processing-ms": "54",
+            "openai-version": "2020-10-01",
+            "x-ratelimit-limit-requests": "200",
+            "x-ratelimit-limit-tokens": "150000",
+            "x-ratelimit-remaining-requests": "197",
+            "x-ratelimit-remaining-tokens": "149994",
+            "x-ratelimit-reset-requests": "19m45.394s",
+            "x-ratelimit-reset-tokens": "2ms",
+            "x-request-id": "c70828b2293314366a76a2b1dcb20688",
+        },
+        200,
+        {
+            "data": [
+                {
+                    "embedding": 
"SLewvFF6iztXKj07UOCQO41IorspWOk79KHuu12FrbwjqLe8FCTnvBKqj7sz6bM8qqUEvFSfITpPrJu7uOSbPM8agzyYYqM7YJl/PBF2mryNN967uRiRO9lGcbszcuq7RZIavAnnNLwWA5s8mnb1vG+UGTyqpYS846PGO2M1X7wIxAO8HfgFvc8s8LuQXPQ5qgsKPOinEL15ndY8/MrOu1LRMTxCbQS7PEYJOyMx7rwDJj+79dVjO5P4UzmoPZq8jUgivL36UjzA/Lc8Jt6Ru4bKAL1jRiM70i5VO4neUjwneAy7mlNEPBVpoDuayo28TO2KvAmBrzzwvyy8B3/KO0ZgCry3sKa6QTmPO0a1Szz46Iw87AAcPF0O5DyJVZw8Ac+Yu1y3Pbqzesw8DUDAuq8hQbyALLy7TngmPL6lETxXxLc6TzXSvKJrYLy309c8OHa0OU3NZ7vru2K8mIXUPCxrErxLU5C5s/EVPI+wjLp7BcE74TvcO+2aFrx4A9w80j+Zu/aAojwmzU08k/hTvBpL4rvHFFQ76YftutrxL7wyxgK9BsIevLkYkTq4B028OZnlPPkcgjxhzfS79oCiuB34BbwITTq97nrzOugwRzwGS1U7CqTgvFxROLx4aWG7E/DxPA3J9jwd+AU8dVWPvGlc2jzwWae57nrzu569E72GU7e8Vn9+vFLA7TtVbZE8eOCqPG+3Sjxr5/W8s+DRPE+sm7wFKKQ8A8A5vUSBVryeIxk8hsqAPAeQjryeIxm8gU/tuxVpoDxVXM250GDlOlEDwjs0t6O8Tt6rOVrGHLvmyFy6dhI7PLPxlbv3YP88B/YTPEZgCrxqKsq8Xh+ou96wQLp5rpo8LSg+vL63/rsFjqk8E/DxPEi3MDzTcw66PjcqPNgSfLwqnaK85QuxPI7iHL2+pRE8Z+ICOxzEELvph+07jHqyu2ltnrwNQMC82BL8vAOdiDwSqo88CLM/PCKFBrzmP6a85Nc7PBaM0bvh1VY7NB2pvMkF9Tx3New87mgGPAoKZjo+nS+/Rk/GucqwMz3fwYS8yrCzPMo56jyDHV08XLe9vB4+aLwXwMY8dVUPvCFATbx2eMC8V7NzvEnrpTsIxIO7yVmNu2lc2ryGQnM8A6/1PH/VFbySO6g80i5VPOY/prv6cyi7W5QMPJVP+jsyLIi84H6wPKM50DrZNIS8UEaWPPrIaTzvrmg8rcoaPRuQm7ysH9y8OxIUO7ss4zq3Od08paG6vAPAuTjYAI88/qmCuuROhbzBMK08R4M7u67+j7uClKa6/KedOsqNArzysM08QJ8UvMD8t7v5P7M799fIvAWx2jxiEi48ja6nPL0LFzxFkpq7LAWNPA1AQLyWlLO6qrfxvOGypTxJUau8aJ8uPceLnTtS0TG9omtgPO7xPDvzbfm7FfJWu2CqwzwAASk96FN4PLPgUbwRdhq8Vn9+PLk7wjs8NUW84yx9vHJCZjzysM079hodO/NbDL2BxrY6CE26OzpEpDv7DaM8y0quO41IIr1+Kte8QdMJvKlxDzy9+lI8hfyQPA3J9jzWmKS7z6O5u4a5vLtXKj088XzYO1fEtzwY4/e7Js1NugbCnjymxOu7906SvPSPAb1ieDO8dnjAu/EW0zp/b5C8mGIjvWTPWTwIxIM8YgFqPKvrZrwKpOA7/jK5O2vViDyfaXs8DR2Pu0AFGrvTc446IIOhvDreHrxRnTw8ROdbu55Gyrsht5Y8tVmAvHK5rzzZvTo8bx1QPMglmLvigBU8oIuDvAFYz7pblIw8OZnlOsTvPbxhzfS8BxnFOpkwE72E60w7cNp7utp6ZrtvHdC4uwmyO5dRX7sAm6M7kqEtvElRK7yWg++7JHanvM6ACDvrZqG8Xh+oupQsyTwkZWO8VzuBu5xVKbzEZoc7wB9pvA796zyZlpi8YbsHvQs+W7u9cZy8gKMFOxYDGzyu7Uu71KeDPJxVqbxwyI68VpDCu9VT67xKqFG
7KWmtuvNteTocs0w7aJ8uPMUSbzz6cyg8MiwIPEtlfTo+wOA75tkgu7VZgDw8WPa8mGIjPKq38bsr0Zc7Ot4evNNiyju9C5c7YCENPP6pAj3uV8I7X3bOusfxIjvpZLy655bMvL9ivbxO3iu8NKbfPNe7VTz9ZMk88RZTu5QsybxeQtk7qpTAOzGSjTxSwO27mGIjPO7OC7x7FoW8wJayvI2uJzttxqk84H4wOUtlfbxblAw8uTtCPIO3Vzxkz9k8ENwfvfQYuLvHFNQ8LvatPF65ojzPLHA8+RyCvK3Kmjx27wk8Dcn2PARatDv3tBc8hkLzPEOz5jyQSoe8gU/tPMRmhzzp2wU90shPPBv2oLsNQMA8jTdevIftMTt/Xsw7MMQdPICjBT012tS7SLewvJBtuDuevZM8LyojPa6HxjtOAd07v9mGusZXqDoPqKo8qdeUvETnW7y5occ5pOSOvPPkwjsDN4O8Mk85vKnXlDtp06O7kZDpO6GuNDtRFAY9lAkYPGHNdDx2Afc7RRtROy5/5LyUoxI9mu0+u/dOEryrYrC867vivJp29TtVbZG8SVGrO0im7LnhsqU80frfPL/IwryBT+07/+/kPLZ8sTwoNbg7ZkiIOxadlbxlnUm68RbTuxkX7Tu/cwG7aqGTPO8CAbzTYsq6AIpfvA50tbzllOc7s3rMO0SBVjzXzJm8eZ3Wu4vgtzwPDrA8W6b5uwJpEzwLtaQ81pgkPJuqarxmro288369u48WkjwREBU9JP/dPJ69kzvw4t27h3bouxhrBbwrNx29F9EKPFmSJ7v8px08Tt6rvEJthLxon648UYz4u61TUTz4lPQ7ERAVuhwqFrzfSjs8RRtRO6lxD7zHelm87lfCu10O5LrXMh886YftvL9iPTxCf/E6MZKNOmAhDb2diZ47eRSgPBfRCrznlsw5MiwIvHW7FD3tI807uG3SPE7eqzx1VY864TtcO3zTMDw7EhS8c+0kPLr47TvUDQm8domEvEi3MLruaAa7tUi8u4FgsTwbkBu6pQfAvEJthLwDnQg8S1OQO55GSrxZLCK8nkZKvFXTFr01dM+8W6Z5vO+u6Luh0eW8rofGvFsdw7x7KHK8sN5svCFAzbo/0SS8f9UVu7Qli7wr0Re95E4FvSg1ODok/907AAGpPHQhGrwtS++71pgkvCtazjsSzcC7exYFPLVZgLzZmom7W6Z5PHr0fLtn9O86oUivukvcRrzjPcE8a8REPAei+zoBNZ685aUrPNBg5bqeIxk8FJuwPPdOkrtUOZy8GRftO4KD4rz/72Q7ERCVu8WJODy5O8I5L7NZuxJECjxFkpq8Uq4AOy2fh7wY9Du8GRdtu48o/7mHdug803MOvCUQIrw2hZM8v+tzvE54pruyI6a6exYFvDXrGDwNQEA8zyxwO7c53TwUJGe8Wk9Tu6ouu7yqCwo8vi7IvNe71TxB04m8domEvKTkDrzsidK8+nOovLfT1zr11eM7SVErO3EOcbzqMqw74Tvcut4WRrz5pbi8oznQvMi/Er0aS+I87lfCvK+qdztd6zI83eJQPFy3vbyACQu9/8wzO/k/s7weG7e8906SPA3J9jw8NUU8TUQxPfEWU7wjH4E8J3gMPC72LTp6SJU8exaFOXBiibyf4MS6EXYaO3DIjjy61by7ACRaO5NvnTvMGB48Dw6wPFEUBr30j4E7niMZvIZC87s7EpS8OZnlPJZxgrxug9U7/DDUvNrxL7yV14e3E2c7PBdaQTwT8HE8oIuDPGIB6rvMB9o6cR+1OwbCHrylfgm8z6M5vIiqXbxFG1G8a9WIPItp7rpGT8Y838GEvAoK5jyAG3g7xRJvPPxBGLzJWQ28XYWtO85vRLp0IZq8cR81vc7mDb28PSe89LKyuig1uDyxEuK8GlwmPIbKgLwHGcW7/qkCvC8ZXzzSyE89F8BGOxPw8Tx+Ktc8BkvVurXiNryRkOk8jyj/OcKH0zp69Pw
8apDPPFuUjLwPDrC8xuBeuD43KrxuYKQ7qXGPvF0OZDx1VQ88VVzNvD9rn7ushWE7EZlLvSL9+DrHi528dzXsu3k30bzeFka7hrm8vD3gAz1/Xsy80D20PNPZE7sorAG86WS8u2Y3xDtvHVC7PKwOO5DkAT3KOeo8c+0kvI+fyLuY61k8SKbsO4TrzLrrZqE87O9XvMkF9Tynb6q847SKvBjjdzyhSK88zTtPPNNzjjsvGV87UQPCvMD8t7stn4e7GRftPBQkZ7x4eiW7sqzcu3ufO7yAG3g8OHa0u0T4n7wcxJC7r6r3vAbCnrth3rg7BxnFumqQzzyXyCi8V8Q3vEPEqjyIu6E8Ac+YvGR6GLulkHY8um83PMqNgrv5pTi8N7kIPOhTeLy6TIY8B5COvDLGArvEzAy9IbcWvIUfQjxQ4BC7B/aTvCfwfrz15ie8ucR4PD1pursLtSS8AgMOOzIsiLv0srI7Q01hPCvRF7vySsg6O5tKunh6JTvCZCI7xuDevLc53btvLhQ8/pi+PJU9Dbugi4O8Qn/xvLpMhrth3ji8n/GIPKouu7tBS3y853MbPGAQyTt27wk7iokRO8d62bzZRnG7sN5svAG+1Lqvqve8JGXjur0Ll7tCf/E75/xRPIWFx7wgDNi8ucT4OZNvHb2nktu8qrfxuyR2J7zWh2A6juKcPDhlcLx/1RU9IAxYPGJ4szylB8C8qfrFO276HjuWcQK9QdOJvCUQIjzjo8a8SeslvBrCKztCf/E66MrBOx1eCz2Xt+Q66YdtvKg9mrrLSq47fFznO1uUjDsoNTg8QyqwuzH4Ejz/Zi67A8A5uKg9GrtFkhq862ahOzSmXzkMDEs8q+vmvNVkLzwc1n28mu0+vCbekTyCg+K7ekgVvO8CAT2yRtc8apBPu1b2R7zUp4M8VW2RvPc9zrx69Hw753ObvCcSB71sG+u8OwHQuv67b7zLSi65HrWxO0ZPRrxmwPq7t7CmPGxvAzygnfC8oIsDvKY7tbwZF+07p2+qvOnbhbv0oW47/2auuThlcDwIxIM8n/EIO6ijH7vHetk7uRiRPGUDT7pgh5I85shcPpGQabykShS7FWmgPPjojDvJ8wc8mlPEOY2uJzt7FoW7HNb9O7rVvDzKjQI80NcuuqvINbvNTBO8TgFdvEJ/cbzEZoe8SVGrvMvkqLyHdui7P2ufvBSbMDw0t6O82GaUPOLmGrxSNze8KVjpuwizPzwqjN48Xh8ovE4B3TtiAeo8azsOO8eLnbyO4py7x/GiPIvgNzzvi7c8BFq0O/dOEj1fU5282ZoJPCL9+LqyIyY8IoUGPNI/mbwKpGC7EkQKuzrN2jwVzyU7QpA1vLIjpjwi64s8HYE8u6eSW7yryLU8yK5OOzysjjwi6wu8GsIrOu7xPDwCaRO8dzVsPP/vZLwT3oQ8cQ7xvOJv0TtWBww8hlM3PBPeBDxT9OK71pgkPPSysrugiwO90GDlvHOHHz3xfNg8904SPVpglzzmP6a7Cgrmu9/BBLyH7bG85QsxvVSfIb2Xt2Q8paG6vOqYsTos9Mi8nqxPu8wHWjuYhdS7GAWAvCIOvTp/bxA8j7CMPG1P4Dxd67I7xxRUvOM9wbxMhwU9Kp0iPfF82LvQYOU6XkJZPBxNx7y0nX28B5COO8FT3rp4eiW8R/oEvSfw/jtC9rq8n/GIux3nQTw8WPY8LBf6uzSmXzzSPxm88rDNvDysDjwyPnW7tdFyPBLNwDo8WHa8bPi5vOO0CrylGAQ8YgFqvEFLfDy7LOO7TIeFPAHPmDv3YP+6/+9kPBKqjzt5rpo8VJ+hvE7eKzyc3t88P2sfvLQUR7wJ1vC6exaFvD6dr7zNO888i+A3ulwuhzuF/JC8gKMFveoyLLxqBxk7YgFquws+2zwOUYS8agcZvGJ4M71AjtC747QKvAizP73UH3a7LvatPJBtuLzEzIy8bG8DvJEHM75E59s7zbIYPObZIL2uZJW
7WRveugblTzy6TIa802JKvD9rH7xlA088QAWavIFP7bwL2FW8vqWRu0ZgijyRkGm7ZGnUvIeHLD1c2m48THbBPPkcAr1NzWc8+JT0uulkvLvXMp+7lU96u7kYET1xhTo8e3wKvItGPTxb+hG87mgGPWqhk7uhrrQ73rBAPCbNTT13rDW8K8DTus8s8DsNt4k8gpQmPLES4ryyvSA8lcbDO60woDyLVwE9BFq0u+cNFj3C7Vi8UXoLPDYOyryQ0z083+S1Ox34hTzEzIw7pX4Ju6ouuzxIpmw8w5iXuylYaTy5sgu9Js3NOo+fyLyjFp+8MMSdvOROBb2n+OA7b7fKOeIJzDoNpkW8WsYct7SdfTxXxLc7TO2KO3YB9zynktu7OkSkPKnXFLvtRv47AJujuzGSDT0twjg8AgOOO4d26DvpZDy8lAkYPI5r0zcGS9W8OGXwu9xIVjyH7TG9IUDNuiqMXrwb9qA79I+BPL1xHLuVPY07MOfOO0ztCruvMoW8BuXPu4AbeLyIRNg8uG3SPO5XQjuFH0K8zm9EPEAoSz0tKL652ZqJOgABqbwsjsM8mlPEPLewpjsVWNw8OGXwOlYHjLzfwQQ81iFbOyJ0Qj3d85S7cQ7xvIqswjxKhSC7906SvAFYz72xiau8LAWNPB1eCz09jGu72ZoJPfDiXTwPDrA8CYGvvNH6XzxTa6y8+RwCvY8of7xxDnG8Ef/QvJ9p+zqh0eU8a16/OzBN1LyDLiE9PFh2u+0jTbxLUxA9ZZ3JvItXgbqL4Dc8BuXPvKnXFDzmPyY8k/hTOlum+bqAksG8OZnluPmluLxRnTy6/KcdvKAUOrzRcSm8fqEgPcTeebzeOXc8KCR0OnN2W7xRA0K8Wsacu+M9wToyLIi8mTATu21P4LuadvW8Dtq6vPmlODsjqLe88ieXPJEHszySoa08U/RiPNQNCbwb9qC8bG+DOXW7FL0OdLW7Tc3nvG8dULsAJNo7fNMwO7sJMr2O4hy85ZTnuwAkWjw+Nyq8rcoaO+8lsrvx86E8U/TivGUUkzp6SJW8lT0NvWz4uTzeFka6qguKvIKD4rt/1ZU8LBf6vD6dr7es/Ko7qWBLvIlVHDxwUUU6Jt4RvRJEijnRcSk88235PGvVCL3zbfm8DaZFO+7xvLs3qES8oznQO9XKNDxZLKK8IIMhvComWb0CAw48fDk2O+nbBb29C5e8ogVbu1EUBryYhdS7OTPgOul1AD25sgs7i1cBPBYmzLtSroA8hfyQvP3bErz9h/o82ZoJO7/ZhjxtT+A8UZ28uzaFk7wJ1nA6dd7FPGg5Kbwb9iC8psRrvBXyVjzGRuS8uAfNu0+smzvFAAK96FN4vC2fhzy65oC7tgXou/9mLjxMELw8GSgxPRBlVjxDxCq80j8ZveinkDxHgzu70j8ZvPGNnDyPn0i8Vn9+urXR8ju10fI7sRJiPDBemLt8OTa8tJ39O4ne0rsaXKa7t0ohPHQhGrdYXjI824sqvDw1RT2/2YY8E/BxPIUOfjv9dQ08PM8/PMwYHrwwXpi7nqxPPM8aA7w+wOC7ROdbO79iPTxVbRE8U45dPOOjRjxwYok8ME1Uu1SfIbyifKQ8UXqLPI85wzsITTq8R+lAPMRVQzzcv58892B/Oqg9mjw3MXu7P9EkvM6AiLyx7zA8eHolPLYWLLugFLq8AJsjvEOzZjk6RKQ8uRgRPXVVjzw0HSk9PWk6PLss47spzzK93rBAvJpTxDun+OC7OTPgvEa1yzvAH+k5fZDcOid4jLuN0di8N7kIPPe0F7wVaSC8zxoDvJVgvrvUpwO9dd7FPKUHQLxn4oI7Ng7KPIydYzzZRvE8LTkCu3bvCTy10fK7QAWaPGHeOLu6+O27omvgO8Rmh7xrXj87AzeDvORg8jnGRuS8UEYWPLPg0TvYZpQ9FJuwPLC7O7xug1U8bvoevAnW8DvxFtM8kEoHPDxYdrzcWZq8n3q/O94nCjvZI0C
82yUlvayWpbyHh6y7ME1UO9b+KTzbFGG89oCiPFpgFzzhTKA84gnMPKgsVjyia+C7XNpuPHxc5zyDLqG8ukyGvKqUQLwG5U88wB/pO+B+ML2O4py8MOdOPHt8irsDnYg6rv6PumJ4szzuV0I80qWePKTkDj14A9y8fqEgu9DXLjykbUU7yEhJvLYFaLyfVw68", + "index": 0, + "object": "embedding", + } + ], + "model": "text-embedding-ada-002-v2", + "object": "list", + "usage": {"prompt_tokens": 6, "total_tokens": 6}, + }, + ), + "You are a scientist.": ( + { + "Content-Type": "application/json", + "openai-model": "gpt-3.5-turbo-0613", + "openai-organization": "new-relic-nkmd8b", + "openai-processing-ms": "1469", + "openai-version": "2020-10-01", + "x-ratelimit-limit-requests": "200", + "x-ratelimit-limit-tokens": "40000", + "x-ratelimit-remaining-requests": "199", + "x-ratelimit-remaining-tokens": "39940", + "x-ratelimit-reset-requests": "7m12s", + "x-ratelimit-reset-tokens": "90ms", + "x-request-id": "49dbbffbd3c3f4612aa48def69059ccd", + }, + 200, + { + "choices": [ + { + "finish_reason": "stop", + "index": 0, + "message": { + "content": "212 degrees " "Fahrenheit is " "equal to 100 " "degrees " "Celsius.", + "role": "assistant", + }, + } + ], + "created": 1696888863, + "id": "chatcmpl-87sb95K4EF2nuJRcTs43Tm9ntTemv", + "model": "gpt-3.5-turbo-0613", + "object": "chat.completion", + "usage": {"completion_tokens": 11, "prompt_tokens": 53, "total_tokens": 64}, + }, + ), + "You are a mathematician.": ( + { + "Content-Type": "application/json", + "openai-model": "gpt-3.5-turbo-0613", + "openai-organization": "new-relic-nkmd8b", + "openai-processing-ms": "1469", + "openai-version": "2020-10-01", + "x-ratelimit-limit-requests": "200", + "x-ratelimit-limit-tokens": "40000", + "x-ratelimit-remaining-requests": "199", + "x-ratelimit-remaining-tokens": "39940", + "x-ratelimit-reset-requests": "7m12s", + "x-ratelimit-reset-tokens": "90ms", + "x-request-id": "49dbbffbd3c3f4612aa48def69059aad", + }, + 200, + { + "choices": [ + { + "finish_reason": "stop", + "index": 0, + "message": { + "content": "1 plus 2 is 3.", + "role": "assistant", + }, 
+                }
+            ],
+            "created": 1696888865,
+            "id": "chatcmpl-87sb95K4EF2nuJRcTs43Tm9ntTeat",
+            "model": "gpt-3.5-turbo-0613",
+            "object": "chat.completion",
+            "usage": {"completion_tokens": 11, "prompt_tokens": 53, "total_tokens": 64},
+        },
+    ),
+}
+
+
+def simple_get(self):
+    content_len = int(self.headers.get("content-length"))
+    content = json.loads(self.rfile.read(content_len).decode("utf-8"))
+
+    prompt = extract_shortened_prompt(content)
+    if not prompt:
+        self.send_response(500)
+        self.end_headers()
+        self.wfile.write("Could not parse prompt.".encode("utf-8"))
+        return
+
+    for k, v in RESPONSES.items():
+        if prompt.startswith(k):
+            headers, status_code, response = v
+            break
+    else:  # If no matches found
+        self.send_response(500)
+        self.end_headers()
+        self.wfile.write(("Unknown Prompt:\n%s" % prompt).encode("utf-8"))
+        return
+
+    # Send response code
+    self.send_response(status_code)
+
+    # Send headers
+    for k, v in headers.items():
+        self.send_header(k, v)
+    self.end_headers()
+
+    # Send response body
+    self.wfile.write(json.dumps(response).encode("utf-8"))
+    return
+
+
+def extract_shortened_prompt(content):
+    prompt = (
+        content.get("prompt", None)
+        or content.get("input", None)
+        or "\n".join(m["content"] for m in content.get("messages", []))
+    )
+    return prompt.lstrip().split("\n")[0]
+
+
+class MockExternalOpenAIServer(MockExternalHTTPServer):
+    # To use this class in a test, one needs to start and stop this server
+    # before and after making requests to the test app that makes the external
+    # calls.
+ + def __init__(self, handler=simple_get, port=None, *args, **kwargs): + super(MockExternalOpenAIServer, self).__init__(handler=handler, port=port, *args, **kwargs) + + +if __name__ == "__main__": + with MockExternalOpenAIServer() as server: + print("MockExternalOpenAIServer serving on port %s" % str(server.port)) + while True: + pass # Serve forever diff --git a/tests/mlmodel_openai/conftest.py b/tests/mlmodel_openai/conftest.py new file mode 100644 index 0000000000..4513be742d --- /dev/null +++ b/tests/mlmodel_openai/conftest.py @@ -0,0 +1,156 @@ +# Copyright 2010 New Relic, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
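The mock server above keys its canned responses on a shortened prompt: the first line of whichever of `prompt`, `input`, or the joined `messages` contents is present. A standalone sketch of that helper (no OpenAI dependency assumed; names mirror the diff, with a `[]` default so a payload with none of the three fields falls through to the "Could not parse prompt" path instead of raising):

```python
# Standalone sketch of the mock server's prompt-shortening helper:
# pick the prompt/input/messages payload and keep only its first line,
# which serves as the lookup key into the RESPONSES table.
def extract_shortened_prompt(content):
    prompt = (
        content.get("prompt")
        or content.get("input")
        or "\n".join(m["content"] for m in content.get("messages", []))
    )
    return prompt.lstrip().split("\n")[0]


chat_payload = {
    "messages": [
        {"role": "system", "content": "You are a scientist."},
        {"role": "user", "content": "What is 212 degrees Fahrenheit converted to Celsius?"},
    ]
}
embedding_payload = {"input": "This is an embedding test."}

print(extract_shortened_prompt(chat_payload))       # -> You are a scientist.
print(extract_shortened_prompt(embedding_payload))  # -> This is an embedding test.
```

Because only the first line is used, the recorded chat responses can be keyed on the system message alone, as the RESPONSES table above does.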
+ +import json +import os + +import pytest +from _mock_external_openai_server import ( + MockExternalOpenAIServer, + extract_shortened_prompt, +) +from testing_support.fixture.event_loop import ( # noqa: F401; pylint: disable=W0611 + event_loop as loop, +) +from testing_support.fixtures import ( # noqa: F401, pylint: disable=W0611 + collector_agent_registration_fixture, + collector_available_fixture, +) + +from newrelic.api.time_trace import current_trace +from newrelic.api.transaction import current_transaction +from newrelic.common.object_wrapper import wrap_function_wrapper + +_default_settings = { + "transaction_tracer.explain_threshold": 0.0, + "transaction_tracer.transaction_threshold": 0.0, + "transaction_tracer.stack_trace_threshold": 0.0, + "debug.log_data_collector_payloads": True, + "debug.record_transaction_failure": True, + "ml_insights_events.enabled": True, +} + +collector_agent_registration = collector_agent_registration_fixture( + app_name="Python Agent Test (mlmodel_openai)", + default_settings=_default_settings, + linked_applications=["Python Agent Test (mlmodel_openai)"], +) + +OPENAI_AUDIT_LOG_FILE = os.path.join(os.path.realpath(os.path.dirname(__file__)), "openai_audit.log") +OPENAI_AUDIT_LOG_CONTENTS = {} + + +@pytest.fixture +def set_trace_info(): + def set_info(): + txn = current_transaction() + if txn: + txn._trace_id = "trace-id" + trace = current_trace() + if trace: + trace.guid = "span-id" + + return set_info + + +@pytest.fixture(autouse=True, scope="session") +def openai_server(): + """ + This fixture will either create a mocked backend for testing purposes, or will + set up an audit log file to log responses of the real OpenAI backend to a file. + The behavior can be controlled by setting NEW_RELIC_TESTING_RECORD_OPENAI_RESPONSES=1 as + an environment variable to run using the real OpenAI backend. 
(Default: mocking) + """ + import openai + + from newrelic.core.config import _environ_as_bool + + if not _environ_as_bool("NEW_RELIC_TESTING_RECORD_OPENAI_RESPONSES", False): + # Use mocked OpenAI backend and prerecorded responses + with MockExternalOpenAIServer() as server: + openai.api_base = "http://localhost:%d" % server.port + openai.api_key = "NOT-A-REAL-SECRET" + yield + else: + # Use real OpenAI backend and record responses + openai.api_key = os.environ.get("OPENAI_API_KEY", "") + if not openai.api_key: + raise RuntimeError("OPENAI_API_KEY environment variable required.") + + # Apply function wrappers to record data + wrap_function_wrapper("openai.api_requestor", "APIRequestor.request", wrap_openai_api_requestor_request) + wrap_function_wrapper( + "openai.api_requestor", "APIRequestor._interpret_response", wrap_openai_api_requestor_interpret_response + ) + yield # Run tests + + # Write responses to audit log + with open(OPENAI_AUDIT_LOG_FILE, "w") as audit_log_fp: + json.dump(OPENAI_AUDIT_LOG_CONTENTS, fp=audit_log_fp, indent=4) + + +# Intercept outgoing requests and log to file for mocking +RECORDED_HEADERS = set(["x-request-id", "content-type"]) + + +def wrap_openai_api_requestor_interpret_response(wrapped, instance, args, kwargs): + rbody, rcode, rheaders = bind_request_interpret_response_params(*args, **kwargs) + headers = dict( + filter( + lambda k: k[0].lower() in RECORDED_HEADERS + or k[0].lower().startswith("openai") + or k[0].lower().startswith("x-ratelimit"), + rheaders.items(), + ) + ) + + if rcode >= 400 or rcode < 200: + rbody = json.loads(rbody) + OPENAI_AUDIT_LOG_CONTENTS["error"] = headers, rcode, rbody # Append response data to audit log + return wrapped(*args, **kwargs) + + +def wrap_openai_api_requestor_request(wrapped, instance, args, kwargs): + params = bind_request_params(*args, **kwargs) + if not params: + return wrapped(*args, **kwargs) + + prompt = extract_shortened_prompt(params) + + # Send request + result = wrapped(*args, 
**kwargs) + + # Clean up data + data = result[0].data + headers = result[0]._headers + headers = dict( + filter( + lambda k: k[0].lower() in RECORDED_HEADERS + or k[0].lower().startswith("openai") + or k[0].lower().startswith("x-ratelimit"), + headers.items(), + ) + ) + + # Log response + OPENAI_AUDIT_LOG_CONTENTS[prompt] = headers, result.http_status, data # Append response data to audit log + return result + + +def bind_request_params(method, url, params=None, *args, **kwargs): + return params + + +def bind_request_interpret_response_params(result, stream): + return result.content.decode("utf-8"), result.status_code, result.headers diff --git a/tests/mlmodel_openai/test_chat_completion.py b/tests/mlmodel_openai/test_chat_completion.py new file mode 100644 index 0000000000..6f3762a826 --- /dev/null +++ b/tests/mlmodel_openai/test_chat_completion.py @@ -0,0 +1,347 @@ +# Copyright 2010 New Relic, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
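When recording real OpenAI responses, the conftest above filters headers down to `x-request-id`/`content-type` plus anything prefixed `openai` or `x-ratelimit` before writing the audit log. A standalone sketch of that filter, assuming plain-dict headers (the real code filters `rheaders.items()` from the requestor):

```python
# Sketch of the header filter applied before responses are written to the
# audit log: keep only the small recorded set plus openai-*/x-ratelimit-*.
RECORDED_HEADERS = {"x-request-id", "content-type"}


def filter_recorded_headers(headers):
    return {
        k: v
        for k, v in headers.items()
        if k.lower() in RECORDED_HEADERS
        or k.lower().startswith("openai")
        or k.lower().startswith("x-ratelimit")
    }


raw = {
    "Content-Type": "application/json",
    "openai-organization": "new-relic-nkmd8b",
    "x-ratelimit-remaining-tokens": "39940",
    "Server": "nginx",            # dropped
    "Set-Cookie": "session=abc",  # dropped
}
print(filter_recorded_headers(raw))
```

Dropping incidental headers like `Server` and `Set-Cookie` keeps the recorded fixtures stable across runs and avoids leaking anything sensitive into the checked-in mock data.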
+ +import openai +from testing_support.fixtures import ( + override_application_settings, + reset_core_stats_engine, +) +from testing_support.validators.validate_ml_event_count import validate_ml_event_count +from testing_support.validators.validate_ml_events import validate_ml_events +from testing_support.validators.validate_transaction_metrics import ( + validate_transaction_metrics, +) + +from newrelic.api.background_task import background_task +from newrelic.api.transaction import add_custom_attribute + +disabled_ml_insights_settings = {"ml_insights_events.enabled": False} + +_test_openai_chat_completion_messages = ( + {"role": "system", "content": "You are a scientist."}, + {"role": "user", "content": "What is 212 degrees Fahrenheit converted to Celsius?"}, +) + +chat_completion_recorded_events = [ + ( + {"type": "LlmChatCompletionSummary"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (mlmodel_openai)", + "conversation_id": "my-awesome-id", + "transaction_id": None, + "span_id": "span-id", + "trace_id": "trace-id", + "request_id": "49dbbffbd3c3f4612aa48def69059ccd", + "api_key_last_four_digits": "sk-CRET", + "duration": None, # Response time varies each test run + "request.model": "gpt-3.5-turbo", + "response.model": "gpt-3.5-turbo-0613", + "response.organization": "new-relic-nkmd8b", + "response.usage.completion_tokens": 11, + "response.usage.total_tokens": 64, + "response.usage.prompt_tokens": 53, + "request.temperature": 0.7, + "request.max_tokens": 100, + "response.choices.finish_reason": "stop", + "response.api_type": "None", + "response.headers.llmVersion": "2020-10-01", + "response.headers.ratelimitLimitRequests": 200, + "response.headers.ratelimitLimitTokens": 40000, + "response.headers.ratelimitResetTokens": "90ms", + "response.headers.ratelimitResetRequests": "7m12s", + "response.headers.ratelimitRemainingTokens": 39940, + "response.headers.ratelimitRemainingRequests": 199, + "vendor": "openAI", + 
"ingest_source": "Python", + "response.number_of_messages": 3, + }, + ), + ( + {"type": "LlmChatCompletionMessage"}, + { + "id": "chatcmpl-87sb95K4EF2nuJRcTs43Tm9ntTemv-0", + "appName": "Python Agent Test (mlmodel_openai)", + "conversation_id": "my-awesome-id", + "request_id": "49dbbffbd3c3f4612aa48def69059ccd", + "span_id": "span-id", + "trace_id": "trace-id", + "transaction_id": None, + "content": "You are a scientist.", + "role": "system", + "completion_id": None, + "sequence": 0, + "response.model": "gpt-3.5-turbo-0613", + "vendor": "openAI", + "ingest_source": "Python", + }, + ), + ( + {"type": "LlmChatCompletionMessage"}, + { + "id": "chatcmpl-87sb95K4EF2nuJRcTs43Tm9ntTemv-1", + "appName": "Python Agent Test (mlmodel_openai)", + "conversation_id": "my-awesome-id", + "request_id": "49dbbffbd3c3f4612aa48def69059ccd", + "span_id": "span-id", + "trace_id": "trace-id", + "transaction_id": None, + "content": "What is 212 degrees Fahrenheit converted to Celsius?", + "role": "user", + "completion_id": None, + "sequence": 1, + "response.model": "gpt-3.5-turbo-0613", + "vendor": "openAI", + "ingest_source": "Python", + }, + ), + ( + {"type": "LlmChatCompletionMessage"}, + { + "id": "chatcmpl-87sb95K4EF2nuJRcTs43Tm9ntTemv-2", + "appName": "Python Agent Test (mlmodel_openai)", + "conversation_id": "my-awesome-id", + "request_id": "49dbbffbd3c3f4612aa48def69059ccd", + "span_id": "span-id", + "trace_id": "trace-id", + "transaction_id": None, + "content": "212 degrees Fahrenheit is equal to 100 degrees Celsius.", + "role": "assistant", + "completion_id": None, + "sequence": 2, + "response.model": "gpt-3.5-turbo-0613", + "vendor": "openAI", + "ingest_source": "Python", + }, + ), +] + + +@reset_core_stats_engine() +@validate_ml_events(chat_completion_recorded_events) +# One summary event, one system message, one user message, and one response message from the assistant +@validate_ml_event_count(count=4) +@validate_transaction_metrics( + 
name="test_chat_completion:test_openai_chat_completion_sync_in_txn_with_convo_id", + custom_metrics=[ + ("Python/ML/OpenAI/%s" % openai.__version__, 1), + ], + background_task=True, +) +@background_task() +def test_openai_chat_completion_sync_in_txn_with_convo_id(set_trace_info): + set_trace_info() + add_custom_attribute("conversation_id", "my-awesome-id") + openai.ChatCompletion.create( + model="gpt-3.5-turbo", messages=_test_openai_chat_completion_messages, temperature=0.7, max_tokens=100 + ) + + +chat_completion_recorded_events_no_convo_id = [ + ( + {"type": "LlmChatCompletionSummary"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (mlmodel_openai)", + "conversation_id": "", + "transaction_id": None, + "span_id": "span-id", + "trace_id": "trace-id", + "request_id": "49dbbffbd3c3f4612aa48def69059ccd", + "api_key_last_four_digits": "sk-CRET", + "duration": None, # Response time varies each test run + "request.model": "gpt-3.5-turbo", + "response.model": "gpt-3.5-turbo-0613", + "response.organization": "new-relic-nkmd8b", + "response.usage.completion_tokens": 11, + "response.usage.total_tokens": 64, + "response.usage.prompt_tokens": 53, + "request.temperature": 0.7, + "request.max_tokens": 100, + "response.choices.finish_reason": "stop", + "response.api_type": "None", + "response.headers.llmVersion": "2020-10-01", + "response.headers.ratelimitLimitRequests": 200, + "response.headers.ratelimitLimitTokens": 40000, + "response.headers.ratelimitResetTokens": "90ms", + "response.headers.ratelimitResetRequests": "7m12s", + "response.headers.ratelimitRemainingTokens": 39940, + "response.headers.ratelimitRemainingRequests": 199, + "vendor": "openAI", + "ingest_source": "Python", + "response.number_of_messages": 3, + }, + ), + ( + {"type": "LlmChatCompletionMessage"}, + { + "id": "chatcmpl-87sb95K4EF2nuJRcTs43Tm9ntTemv-0", + "appName": "Python Agent Test (mlmodel_openai)", + "conversation_id": "", + "request_id": 
"49dbbffbd3c3f4612aa48def69059ccd", + "span_id": "span-id", + "trace_id": "trace-id", + "transaction_id": None, + "content": "You are a scientist.", + "role": "system", + "completion_id": None, + "sequence": 0, + "response.model": "gpt-3.5-turbo-0613", + "vendor": "openAI", + "ingest_source": "Python", + }, + ), + ( + {"type": "LlmChatCompletionMessage"}, + { + "id": "chatcmpl-87sb95K4EF2nuJRcTs43Tm9ntTemv-1", + "appName": "Python Agent Test (mlmodel_openai)", + "conversation_id": "", + "request_id": "49dbbffbd3c3f4612aa48def69059ccd", + "span_id": "span-id", + "trace_id": "trace-id", + "transaction_id": None, + "content": "What is 212 degrees Fahrenheit converted to Celsius?", + "role": "user", + "completion_id": None, + "sequence": 1, + "response.model": "gpt-3.5-turbo-0613", + "vendor": "openAI", + "ingest_source": "Python", + }, + ), + ( + {"type": "LlmChatCompletionMessage"}, + { + "id": "chatcmpl-87sb95K4EF2nuJRcTs43Tm9ntTemv-2", + "appName": "Python Agent Test (mlmodel_openai)", + "conversation_id": "", + "request_id": "49dbbffbd3c3f4612aa48def69059ccd", + "span_id": "span-id", + "trace_id": "trace-id", + "transaction_id": None, + "content": "212 degrees Fahrenheit is equal to 100 degrees Celsius.", + "role": "assistant", + "completion_id": None, + "sequence": 2, + "response.model": "gpt-3.5-turbo-0613", + "vendor": "openAI", + "ingest_source": "Python", + }, + ), +] + + +@reset_core_stats_engine() +@validate_ml_events(chat_completion_recorded_events_no_convo_id) +# One summary event, one system message, one user message, and one response message from the assistant +@validate_ml_event_count(count=4) +@background_task() +def test_openai_chat_completion_sync_in_txn_no_convo_id(set_trace_info): + set_trace_info() + openai.ChatCompletion.create( + model="gpt-3.5-turbo", messages=_test_openai_chat_completion_messages, temperature=0.7, max_tokens=100 + ) + + +@reset_core_stats_engine() +@validate_ml_event_count(count=0) +def 
test_openai_chat_completion_sync_outside_txn(): + add_custom_attribute("conversation_id", "my-awesome-id") + openai.ChatCompletion.create( + model="gpt-3.5-turbo", messages=_test_openai_chat_completion_messages, temperature=0.7, max_tokens=100 + ) + + +@override_application_settings(disabled_ml_insights_settings) +@reset_core_stats_engine() +@validate_ml_event_count(count=0) +@validate_transaction_metrics( + name="test_chat_completion:test_openai_chat_completion_sync_ml_insights_disabled", + custom_metrics=[ + ("Python/ML/OpenAI/%s" % openai.__version__, 1), + ], + background_task=True, +) +@background_task() +def test_openai_chat_completion_sync_ml_insights_disabled(set_trace_info): + set_trace_info() + openai.ChatCompletion.create( + model="gpt-3.5-turbo", messages=_test_openai_chat_completion_messages, temperature=0.7, max_tokens=100 + ) + + +@reset_core_stats_engine() +@validate_ml_events(chat_completion_recorded_events_no_convo_id) +@validate_ml_event_count(count=4) +@background_task() +def test_openai_chat_completion_async_conversation_id_unset(loop, set_trace_info): + set_trace_info() + + loop.run_until_complete( + openai.ChatCompletion.acreate( + model="gpt-3.5-turbo", messages=_test_openai_chat_completion_messages, temperature=0.7, max_tokens=100 + ) + ) + + +@reset_core_stats_engine() +@validate_ml_events(chat_completion_recorded_events) +@validate_ml_event_count(count=4) +@validate_transaction_metrics( + name="test_chat_completion:test_openai_chat_completion_async_conversation_id_set", + custom_metrics=[ + ("Python/ML/OpenAI/%s" % openai.__version__, 1), + ], + background_task=True, +) +@background_task() +def test_openai_chat_completion_async_conversation_id_set(loop, set_trace_info): + set_trace_info() + add_custom_attribute("conversation_id", "my-awesome-id") + + loop.run_until_complete( + openai.ChatCompletion.acreate( + model="gpt-3.5-turbo", messages=_test_openai_chat_completion_messages, temperature=0.7, max_tokens=100 + ) + ) + + 
+@reset_core_stats_engine() +@validate_ml_event_count(count=0) +def test_openai_chat_completion_async_outside_transaction(loop): + loop.run_until_complete( + openai.ChatCompletion.acreate( + model="gpt-3.5-turbo", messages=_test_openai_chat_completion_messages, temperature=0.7, max_tokens=100 + ) + ) + + +@override_application_settings(disabled_ml_insights_settings) +@reset_core_stats_engine() +@validate_ml_event_count(count=0) +@validate_transaction_metrics( + name="test_chat_completion:test_openai_chat_completion_async_disabled_ml_settings", + custom_metrics=[ + ("Python/ML/OpenAI/%s" % openai.__version__, 1), + ], + background_task=True, +) +@background_task() +def test_openai_chat_completion_async_disabled_ml_settings(loop): + loop.run_until_complete( + openai.ChatCompletion.acreate( + model="gpt-3.5-turbo", messages=_test_openai_chat_completion_messages, temperature=0.7, max_tokens=100 + ) + ) diff --git a/tests/mlmodel_openai/test_chat_completion_error.py b/tests/mlmodel_openai/test_chat_completion_error.py new file mode 100644 index 0000000000..c826b0b324 --- /dev/null +++ b/tests/mlmodel_openai/test_chat_completion_error.py @@ -0,0 +1,328 @@ +# Copyright 2010 New Relic, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
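The mock server earlier in this diff dispatches on the shortened prompt with a `startswith` match and a `for`/`else` fallback that answers 500 for unrecorded prompts. A minimal sketch of that lookup, with the RESPONSES entries abbreviated for illustration:

```python
# Sketch of the mock server's RESPONSES dispatch: the first recorded key that
# prefixes the incoming prompt wins; the for/else handles unknown prompts.
RESPONSES = {
    "You are a scientist.": ({"Content-Type": "application/json"}, 200, {"object": "chat.completion"}),
    "Model does not exist.": ({"Content-Type": "application/json"}, 404, {"error": {"code": "model_not_found"}}),
}


def lookup(prompt):
    for key, value in RESPONSES.items():
        if prompt.startswith(key):
            headers, status_code, body = value
            break
    else:  # loop exhausted with no match: mirrors the 500 "Unknown Prompt" path
        return {}, 500, {"error": "Unknown Prompt:\n%s" % prompt}
    return headers, status_code, body


print(lookup("You are a scientist.")[1])  # -> 200
print(lookup("Some unrecorded prompt")[1])  # -> 500
```

The 500 fallback makes a missing fixture fail loudly in tests rather than silently returning an empty success response.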
+ +import openai +import pytest +from testing_support.fixtures import dt_enabled, reset_core_stats_engine +from testing_support.validators.validate_error_trace_attributes import ( + validate_error_trace_attributes, +) +from testing_support.validators.validate_span_events import validate_span_events + +from newrelic.api.background_task import background_task +from newrelic.common.object_names import callable_name + +_test_openai_chat_completion_messages = ( + {"role": "system", "content": "You are a scientist."}, + {"role": "user", "content": "What is 212 degrees Fahrenheit converted to Celsius?"}, +) + + +# Sync tests: + + +# No model provided +@dt_enabled +@reset_core_stats_engine() +@validate_error_trace_attributes( + callable_name(openai.InvalidRequestError), + exact_attrs={ + "agent": {}, + "intrinsic": {}, + "user": { + "api_key_last_four_digits": "sk-CRET", + "request.temperature": 0.7, + "request.max_tokens": 100, + "vendor": "openAI", + "ingest_source": "Python", + "response.number_of_messages": 2, + "error.param": "engine", + }, + }, +) +@validate_span_events( + exact_agents={ + "error.message": "Must provide an 'engine' or 'model' parameter to create a ", + } +) +@background_task() +def test_chat_completion_invalid_request_error_no_model(): + with pytest.raises(openai.InvalidRequestError): + openai.ChatCompletion.create( + # no model provided, + messages=_test_openai_chat_completion_messages, + temperature=0.7, + max_tokens=100, + ) + + +# Invalid model provided +@dt_enabled +@reset_core_stats_engine() +@validate_error_trace_attributes( + callable_name(openai.InvalidRequestError), + exact_attrs={ + "agent": {}, + "intrinsic": {}, + "user": { + "api_key_last_four_digits": "sk-CRET", + "request.model": "does-not-exist", + "request.temperature": 0.7, + "request.max_tokens": 100, + "vendor": "openAI", + "ingest_source": "Python", + "response.number_of_messages": 1, + "error.code": "model_not_found", + "http.statusCode": 404, + }, + }, +) 
+@validate_span_events(
+    exact_agents={
+        "error.message": "The model `does-not-exist` does not exist",
+    }
+)
+@background_task()
+def test_chat_completion_invalid_request_error_invalid_model():
+    with pytest.raises(openai.InvalidRequestError):
+        openai.ChatCompletion.create(
+            model="does-not-exist",
+            messages=({"role": "user", "content": "Model does not exist."},),
+            temperature=0.7,
+            max_tokens=100,
+        )
+
+
+# No api_key provided
+@dt_enabled
+@reset_core_stats_engine()
+@validate_error_trace_attributes(
+    callable_name(openai.error.AuthenticationError),
+    exact_attrs={
+        "agent": {},
+        "intrinsic": {},
+        "user": {
+            "request.model": "gpt-3.5-turbo",
+            "request.temperature": 0.7,
+            "request.max_tokens": 100,
+            "vendor": "openAI",
+            "ingest_source": "Python",
+            "response.number_of_messages": 2,
+        },
+    },
+)
+@validate_span_events(
+    exact_agents={
+        "error.message": "No API key provided. You can set your API key in code using 'openai.api_key = ', or you can set the environment variable OPENAI_API_KEY=). If your API key is stored in a file, you can point the openai module at it with 'openai.api_key_path = '. You can generate API keys in the OpenAI web interface. See https://platform.openai.com/account/api-keys for details.",
+    }
+)
+@background_task()
+def test_chat_completion_authentication_error(monkeypatch):
+    with pytest.raises(openai.error.AuthenticationError):
+        monkeypatch.setattr(openai, "api_key", None)  # openai.api_key = None
+        openai.ChatCompletion.create(
+            model="gpt-3.5-turbo",
+            messages=_test_openai_chat_completion_messages,
+            temperature=0.7,
+            max_tokens=100,
+        )
+
+
+# Wrong api_key provided
+@dt_enabled
+@reset_core_stats_engine()
+@validate_error_trace_attributes(
+    callable_name(openai.error.AuthenticationError),
+    exact_attrs={
+        "agent": {},
+        "intrinsic": {},
+        "user": {
+            "api_key_last_four_digits": "sk-BEEF",
+            "request.model": "gpt-3.5-turbo",
+            "request.temperature": 0.7,
+            "request.max_tokens": 100,
+            "vendor": "openAI",
+            "ingest_source": "Python",
+            "response.number_of_messages": 1,
+            "http.statusCode": 401,
+        },
+    },
+)
+@validate_span_events(
+    exact_agents={
+        "error.message": "Incorrect API key provided: invalid. You can find your API key at https://platform.openai.com/account/api-keys.",
+    }
+)
+@background_task()
+def test_chat_completion_wrong_api_key_error(monkeypatch):
+    with pytest.raises(openai.error.AuthenticationError):
+        monkeypatch.setattr(openai, "api_key", "DEADBEEF")  # openai.api_key = "DEADBEEF"
+        openai.ChatCompletion.create(
+            model="gpt-3.5-turbo",
+            messages=({"role": "user", "content": "Invalid API key."},),
+            temperature=0.7,
+            max_tokens=100,
+        )
+
+
+# Async tests:
+
+
+# No model provided
+@dt_enabled
+@reset_core_stats_engine()
+@validate_error_trace_attributes(
+    callable_name(openai.InvalidRequestError),
+    exact_attrs={
+        "agent": {},
+        "intrinsic": {},
+        "user": {
+            "api_key_last_four_digits": "sk-CRET",
+            "request.temperature": 0.7,
+            "request.max_tokens": 100,
+            "vendor": "openAI",
+            "ingest_source": "Python",
+            "response.number_of_messages": 2,
+            "error.param": "engine",
+        },
+    },
+)
+@validate_span_events(
+    exact_agents={
+        "error.message": "Must provide an 'engine' or 'model' parameter to create a ",
+    }
+)
+@background_task()
+def test_chat_completion_invalid_request_error_no_model_async(loop):
+    with pytest.raises(openai.InvalidRequestError):
+        loop.run_until_complete(
+            openai.ChatCompletion.acreate(
+                # no model provided,
+                messages=_test_openai_chat_completion_messages,
+                temperature=0.7,
+                max_tokens=100,
+            )
+        )
+
+
+# Invalid model provided
+@dt_enabled
+@reset_core_stats_engine()
+@validate_error_trace_attributes(
+    callable_name(openai.InvalidRequestError),
+    exact_attrs={
+        "agent": {},
+        "intrinsic": {},
+        "user": {
+            "api_key_last_four_digits": "sk-CRET",
+            "request.model": "does-not-exist",
+            "request.temperature": 0.7,
+            "request.max_tokens": 100,
+            "vendor": "openAI",
+            "ingest_source": "Python",
+            "response.number_of_messages": 1,
+            "error.code": "model_not_found",
+            "http.statusCode": 404,
+        },
+    },
+)
+@validate_span_events(
+    exact_agents={
+        "error.message": "The model `does-not-exist` does not exist",
+    }
+)
+@background_task() +def test_chat_completion_invalid_request_error_invalid_model_async(loop): + with pytest.raises(openai.InvalidRequestError): + loop.run_until_complete( + openai.ChatCompletion.acreate( + model="does-not-exist", + messages=({"role": "user", "content": "Model does not exist."},), + temperature=0.7, + max_tokens=100, + ) + ) + + +# No api_key provided +@dt_enabled +@reset_core_stats_engine() +@validate_error_trace_attributes( + callable_name(openai.error.AuthenticationError), + exact_attrs={ + "agent": {}, + "intrinsic": {}, + "user": { + "request.model": "gpt-3.5-turbo", + "request.temperature": 0.7, + "request.max_tokens": 100, + "vendor": "openAI", + "ingest_source": "Python", + "response.number_of_messages": 2, + }, + }, +) +@validate_span_events( + exact_agents={ + "error.message": "No API key provided. You can set your API key in code using 'openai.api_key = ', or you can set the environment variable OPENAI_API_KEY=). If your API key is stored in a file, you can point the openai module at it with 'openai.api_key_path = '. You can generate API keys in the OpenAI web interface. 
See https://platform.openai.com/account/api-keys for details.", + } +) +@background_task() +def test_chat_completion_authentication_error_async(loop, monkeypatch): + with pytest.raises(openai.error.AuthenticationError): + monkeypatch.setattr(openai, "api_key", None) # openai.api_key = None + loop.run_until_complete( + openai.ChatCompletion.acreate( + model="gpt-3.5-turbo", messages=_test_openai_chat_completion_messages, temperature=0.7, max_tokens=100 + ) + ) + + +# Wrong api_key provided +@dt_enabled +@reset_core_stats_engine() +@validate_error_trace_attributes( + callable_name(openai.error.AuthenticationError), + exact_attrs={ + "agent": {}, + "intrinsic": {}, + "user": { + "api_key_last_four_digits": "sk-BEEF", + "request.model": "gpt-3.5-turbo", + "request.temperature": 0.7, + "request.max_tokens": 100, + "vendor": "openAI", + "ingest_source": "Python", + "response.number_of_messages": 1, + "http.statusCode": 401, + }, + }, +) +@validate_span_events( + exact_agents={ + "error.message": "Incorrect API key provided: invalid. You can find your API key at https://platform.openai.com/account/api-keys.", + } +) +@background_task() +def test_chat_completion_wrong_api_key_error_async(loop, monkeypatch): + with pytest.raises(openai.error.AuthenticationError): + monkeypatch.setattr(openai, "api_key", "DEADBEEF") # openai.api_key = "DEADBEEF" + loop.run_until_complete( + openai.ChatCompletion.acreate( + model="gpt-3.5-turbo", + messages=({"role": "user", "content": "Invalid API key."},), + temperature=0.7, + max_tokens=100, + ) + ) diff --git a/tests/mlmodel_openai/test_embeddings.py b/tests/mlmodel_openai/test_embeddings.py new file mode 100644 index 0000000000..180052b0de --- /dev/null +++ b/tests/mlmodel_openai/test_embeddings.py @@ -0,0 +1,143 @@ +# Copyright 2010 New Relic, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import openai +from testing_support.fixtures import ( # override_application_settings, + override_application_settings, + reset_core_stats_engine, +) +from testing_support.validators.validate_ml_event_count import validate_ml_event_count +from testing_support.validators.validate_ml_events import validate_ml_events +from testing_support.validators.validate_transaction_metrics import ( + validate_transaction_metrics, +) + +from newrelic.api.background_task import background_task + +disabled_ml_insights_settings = {"ml_insights_events.enabled": False} + + +embedding_recorded_events = [ + ( + {"type": "LlmEmbedding"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (mlmodel_openai)", + "transaction_id": None, + "span_id": "span-id", + "trace_id": "trace-id", + "input": "This is an embedding test.", + "api_key_last_four_digits": "sk-CRET", + "duration": None, # Response time varies each test run + "response.model": "text-embedding-ada-002-v2", + "request.model": "text-embedding-ada-002", + "request_id": "c70828b2293314366a76a2b1dcb20688", + "response.organization": "new-relic-nkmd8b", + "response.usage.total_tokens": 6, + "response.usage.prompt_tokens": 6, + "response.api_type": "None", + "response.headers.llmVersion": "2020-10-01", + "response.headers.ratelimitLimitRequests": 200, + "response.headers.ratelimitLimitTokens": 150000, + "response.headers.ratelimitResetTokens": "2ms", + "response.headers.ratelimitResetRequests": "19m45.394s", + "response.headers.ratelimitRemainingTokens": 149994, + 
"response.headers.ratelimitRemainingRequests": 197, + "vendor": "openAI", + "ingest_source": "Python", + }, + ), +] + + +@reset_core_stats_engine() +@validate_ml_events(embedding_recorded_events) +@validate_ml_event_count(count=1) +@validate_transaction_metrics( + name="test_embeddings:test_openai_embedding_sync", + custom_metrics=[ + ("Python/ML/OpenAI/%s" % openai.__version__, 1), + ], + background_task=True, +) +@background_task() +def test_openai_embedding_sync(set_trace_info): + set_trace_info() + openai.Embedding.create(input="This is an embedding test.", model="text-embedding-ada-002") + + +@reset_core_stats_engine() +@validate_ml_event_count(count=0) +def test_openai_embedding_sync_outside_txn(): + openai.Embedding.create(input="This is an embedding test.", model="text-embedding-ada-002") + + +@override_application_settings(disabled_ml_insights_settings) +@reset_core_stats_engine() +@validate_ml_event_count(count=0) +@validate_transaction_metrics( + name="test_embeddings:test_openai_chat_completion_sync_disabled_settings", + custom_metrics=[ + ("Python/ML/OpenAI/%s" % openai.__version__, 1), + ], + background_task=True, +) +@background_task() +def test_openai_chat_completion_sync_disabled_settings(set_trace_info): + set_trace_info() + openai.Embedding.create(input="This is an embedding test.", model="text-embedding-ada-002") + + +@reset_core_stats_engine() +@validate_ml_events(embedding_recorded_events) +@validate_ml_event_count(count=1) +@validate_transaction_metrics( + name="test_embeddings:test_openai_embedding_async", + custom_metrics=[ + ("Python/ML/OpenAI/%s" % openai.__version__, 1), + ], + background_task=True, +) +@background_task() +def test_openai_embedding_async(loop, set_trace_info): + set_trace_info() + + loop.run_until_complete( + openai.Embedding.acreate(input="This is an embedding test.", model="text-embedding-ada-002") + ) + + +@reset_core_stats_engine() +@validate_ml_event_count(count=0) +def 
test_openai_embedding_async_outside_transaction(loop): + loop.run_until_complete( + openai.Embedding.acreate(input="This is an embedding test.", model="text-embedding-ada-002") + ) + + +@override_application_settings(disabled_ml_insights_settings) +@reset_core_stats_engine() +@validate_ml_event_count(count=0) +@validate_transaction_metrics( + name="test_embeddings:test_openai_embedding_async_disabled_ml_insights_events", + custom_metrics=[ + ("Python/ML/OpenAI/%s" % openai.__version__, 1), + ], + background_task=True, +) +@background_task() +def test_openai_embedding_async_disabled_ml_insights_events(loop): + loop.run_until_complete( + openai.Embedding.acreate(input="This is an embedding test.", model="text-embedding-ada-002") + ) diff --git a/tests/mlmodel_openai/test_embeddings_error.py b/tests/mlmodel_openai/test_embeddings_error.py new file mode 100644 index 0000000000..35d189ff50 --- /dev/null +++ b/tests/mlmodel_openai/test_embeddings_error.py @@ -0,0 +1,264 @@ +# Copyright 2010 New Relic, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +import openai +import pytest +from testing_support.fixtures import dt_enabled, reset_core_stats_engine +from testing_support.validators.validate_error_trace_attributes import ( + validate_error_trace_attributes, +) +from testing_support.validators.validate_span_events import validate_span_events + +from newrelic.api.background_task import background_task +from newrelic.common.object_names import callable_name + +# Sync tests: + + +# No model provided +@dt_enabled +@reset_core_stats_engine() +@validate_error_trace_attributes( + callable_name(openai.InvalidRequestError), + exact_attrs={ + "agent": {}, + "intrinsic": {}, + "user": { + "api_key_last_four_digits": "sk-CRET", + "vendor": "openAI", + "ingest_source": "Python", + "error.param": "engine", + }, + }, +) +@validate_span_events( + exact_agents={ + "error.message": "Must provide an 'engine' or 'model' parameter to create a ", + } +) +@background_task() +def test_embeddings_invalid_request_error_no_model(): + with pytest.raises(openai.InvalidRequestError): + openai.Embedding.create( + input="This is an embedding test with no model.", + # no model provided + ) + + +# Invalid model provided +@dt_enabled +@reset_core_stats_engine() +@validate_error_trace_attributes( + callable_name(openai.InvalidRequestError), + exact_attrs={ + "agent": {}, + "intrinsic": {}, + "user": { + "api_key_last_four_digits": "sk-CRET", + "request.model": "does-not-exist", + "vendor": "openAI", + "ingest_source": "Python", + "error.code": "model_not_found", + "http.statusCode": 404, + }, + }, +) +@validate_span_events( + exact_agents={ + "error.message": "The model `does-not-exist` does not exist", + # "http.statusCode": 404, + } +) +@background_task() +def test_embeddings_invalid_request_error_invalid_model(): + with pytest.raises(openai.InvalidRequestError): + openai.Embedding.create(input="Model does not exist.", model="does-not-exist") + + +# No api_key provided +@dt_enabled +@reset_core_stats_engine() 
+@validate_error_trace_attributes( + callable_name(openai.error.AuthenticationError), + exact_attrs={ + "agent": {}, + "intrinsic": {}, + "user": { + "request.model": "text-embedding-ada-002", + "vendor": "openAI", + "ingest_source": "Python", + }, + }, +) +@validate_span_events( + exact_agents={ + "error.message": "No API key provided. You can set your API key in code using 'openai.api_key = ', or you can set the environment variable OPENAI_API_KEY=). If your API key is stored in a file, you can point the openai module at it with 'openai.api_key_path = '. You can generate API keys in the OpenAI web interface. See https://platform.openai.com/account/api-keys for details.", + } +) +@background_task() +def test_embeddings_authentication_error(monkeypatch): + with pytest.raises(openai.error.AuthenticationError): + monkeypatch.setattr(openai, "api_key", None) # openai.api_key = None + openai.Embedding.create(input="Invalid API key.", model="text-embedding-ada-002") + + +# Wrong api_key provided +@dt_enabled +@reset_core_stats_engine() +@validate_error_trace_attributes( + callable_name(openai.error.AuthenticationError), + exact_attrs={ + "agent": {}, + "intrinsic": {}, + "user": { + "api_key_last_four_digits": "sk-BEEF", + "request.model": "text-embedding-ada-002", + "vendor": "openAI", + "ingest_source": "Python", + "http.statusCode": 401, + }, + }, +) +@validate_span_events( + exact_agents={ + "error.message": "Incorrect API key provided: DEADBEEF. 
You can find your API key at https://platform.openai.com/account/api-keys.", + } +) +@background_task() +def test_embeddings_wrong_api_key_error(monkeypatch): + with pytest.raises(openai.error.AuthenticationError): + monkeypatch.setattr(openai, "api_key", "DEADBEEF") # openai.api_key = "DEADBEEF" + openai.Embedding.create(input="Embedded: Invalid API key.", model="text-embedding-ada-002") + + +# Async tests: + + +# No model provided +@dt_enabled +@reset_core_stats_engine() +@validate_error_trace_attributes( + callable_name(openai.InvalidRequestError), + exact_attrs={ + "agent": {}, + "intrinsic": {}, + "user": { + "api_key_last_four_digits": "sk-CRET", + "vendor": "openAI", + "ingest_source": "Python", + "error.param": "engine", + }, + }, +) +@validate_span_events( + exact_agents={ + "error.message": "Must provide an 'engine' or 'model' parameter to create a ", + } +) +@background_task() +def test_embeddings_invalid_request_error_no_model_async(loop): + with pytest.raises(openai.InvalidRequestError): + loop.run_until_complete( + openai.Embedding.acreate( + input="This is an embedding test with no model.", + # No model provided + ) + ) + + +# Invalid model provided +@dt_enabled +@reset_core_stats_engine() +@validate_error_trace_attributes( + callable_name(openai.InvalidRequestError), + exact_attrs={ + "agent": {}, + "intrinsic": {}, + "user": { + "api_key_last_four_digits": "sk-CRET", + "request.model": "does-not-exist", + "vendor": "openAI", + "ingest_source": "Python", + "error.code": "model_not_found", + "http.statusCode": 404, + }, + }, +) +@validate_span_events( + exact_agents={ + "error.message": "The model `does-not-exist` does not exist", + } +) +@background_task() +def test_embeddings_invalid_request_error_invalid_model_async(loop): + with pytest.raises(openai.InvalidRequestError): + loop.run_until_complete(openai.Embedding.acreate(input="Model does not exist.", model="does-not-exist")) + + +# No api_key provided +@dt_enabled +@reset_core_stats_engine() 
+@validate_error_trace_attributes( + callable_name(openai.error.AuthenticationError), + exact_attrs={ + "agent": {}, + "intrinsic": {}, + "user": { + "request.model": "text-embedding-ada-002", + "vendor": "openAI", + "ingest_source": "Python", + }, + }, +) +@validate_span_events( + exact_agents={ + "error.message": "No API key provided. You can set your API key in code using 'openai.api_key = ', or you can set the environment variable OPENAI_API_KEY=). If your API key is stored in a file, you can point the openai module at it with 'openai.api_key_path = '. You can generate API keys in the OpenAI web interface. See https://platform.openai.com/account/api-keys for details.", + } +) +@background_task() +def test_embeddings_authentication_error_async(loop, monkeypatch): + with pytest.raises(openai.error.AuthenticationError): + monkeypatch.setattr(openai, "api_key", None) # openai.api_key = None + loop.run_until_complete(openai.Embedding.acreate(input="Invalid API key.", model="text-embedding-ada-002")) + + +# Wrong api_key provided +@dt_enabled +@reset_core_stats_engine() +@validate_error_trace_attributes( + callable_name(openai.error.AuthenticationError), + exact_attrs={ + "agent": {}, + "intrinsic": {}, + "user": { + "api_key_last_four_digits": "sk-BEEF", + "request.model": "text-embedding-ada-002", + "vendor": "openAI", + "ingest_source": "Python", + "http.statusCode": 401, + }, + }, +) +@validate_span_events( + exact_agents={ + "error.message": "Incorrect API key provided: DEADBEEF. 
You can find your API key at https://platform.openai.com/account/api-keys.", + } +) +@background_task() +def test_embeddings_wrong_api_key_error_async(loop, monkeypatch): + with pytest.raises(openai.error.AuthenticationError): + monkeypatch.setattr(openai, "api_key", "DEADBEEF") # openai.api_key = "DEADBEEF" + loop.run_until_complete( + openai.Embedding.acreate(input="Embedded: Invalid API key.", model="text-embedding-ada-002") + ) diff --git a/tests/mlmodel_openai/test_get_llm_message_ids.py b/tests/mlmodel_openai/test_get_llm_message_ids.py new file mode 100644 index 0000000000..e20245128e --- /dev/null +++ b/tests/mlmodel_openai/test_get_llm_message_ids.py @@ -0,0 +1,234 @@ +# Copyright 2010 New Relic, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
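The error tests above assert an `api_key_last_four_digits` attribute such as `"sk-BEEF"` for the key `"DEADBEEF"` and `"sk-CRET"` for the default test key. A hypothetical helper illustrating how such an attribute could be derived (the agent's actual implementation may differ):

```python
def api_key_last_four_digits(api_key):
    # Hypothetical sketch: prefix "sk-" plus the last four characters of the
    # key, matching the attribute values asserted in the tests above.
    if not api_key:
        return ""
    return "sk-" + api_key[-4:]


assert api_key_last_four_digits("DEADBEEF") == "sk-BEEF"
```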
+
+import openai
+from testing_support.fixtures import reset_core_stats_engine
+from testing_support.validators.validate_ml_event_count import validate_ml_event_count
+
+from newrelic.api.background_task import background_task
+from newrelic.api.ml_model import get_llm_message_ids, record_llm_feedback_event
+from newrelic.api.transaction import add_custom_attribute, current_transaction
+
+_test_openai_chat_completion_messages_1 = (
+    {"role": "system", "content": "You are a scientist."},
+    {"role": "user", "content": "What is 212 degrees Fahrenheit converted to Celsius?"},
+)
+_test_openai_chat_completion_messages_2 = (
+    {"role": "system", "content": "You are a mathematician."},
+    {"role": "user", "content": "What is 1 plus 2?"},
+)
+expected_message_ids_1 = [
+    {
+        "conversation_id": "my-awesome-id",
+        "request_id": "49dbbffbd3c3f4612aa48def69059ccd",
+        "message_id": "chatcmpl-87sb95K4EF2nuJRcTs43Tm9ntTemv-0",
+    },
+    {
+        "conversation_id": "my-awesome-id",
+        "request_id": "49dbbffbd3c3f4612aa48def69059ccd",
+        "message_id": "chatcmpl-87sb95K4EF2nuJRcTs43Tm9ntTemv-1",
+    },
+    {
+        "conversation_id": "my-awesome-id",
+        "request_id": "49dbbffbd3c3f4612aa48def69059ccd",
+        "message_id": "chatcmpl-87sb95K4EF2nuJRcTs43Tm9ntTemv-2",
+    },
+]
+
+expected_message_ids_1_no_conversation_id = [
+    {
+        "conversation_id": "",
+        "request_id": "49dbbffbd3c3f4612aa48def69059ccd",
+        "message_id": "chatcmpl-87sb95K4EF2nuJRcTs43Tm9ntTemv-0",
+    },
+    {
+        "conversation_id": "",
+        "request_id": "49dbbffbd3c3f4612aa48def69059ccd",
+        "message_id": "chatcmpl-87sb95K4EF2nuJRcTs43Tm9ntTemv-1",
+    },
+    {
+        "conversation_id": "",
+        "request_id": "49dbbffbd3c3f4612aa48def69059ccd",
+        "message_id": "chatcmpl-87sb95K4EF2nuJRcTs43Tm9ntTemv-2",
+    },
+]
+expected_message_ids_2 = [
+    {
+        "conversation_id": "my-awesome-id",
+        "request_id": "49dbbffbd3c3f4612aa48def69059aad",
+        "message_id": "chatcmpl-87sb95K4EF2nuJRcTs43Tm9ntTeat-0",
+    },
+    {
+        "conversation_id": "my-awesome-id",
+        "request_id": "49dbbffbd3c3f4612aa48def69059aad",
+        "message_id": "chatcmpl-87sb95K4EF2nuJRcTs43Tm9ntTeat-1",
+    },
+    {
+        "conversation_id": "my-awesome-id",
+        "request_id": "49dbbffbd3c3f4612aa48def69059aad",
+        "message_id": "chatcmpl-87sb95K4EF2nuJRcTs43Tm9ntTeat-2",
+    },
+]
+expected_message_ids_2_no_conversation_id = [
+    {
+        "conversation_id": "",
+        "request_id": "49dbbffbd3c3f4612aa48def69059aad",
+        "message_id": "chatcmpl-87sb95K4EF2nuJRcTs43Tm9ntTeat-0",
+    },
+    {
+        "conversation_id": "",
+        "request_id": "49dbbffbd3c3f4612aa48def69059aad",
+        "message_id": "chatcmpl-87sb95K4EF2nuJRcTs43Tm9ntTeat-1",
+    },
+    {
+        "conversation_id": "",
+        "request_id": "49dbbffbd3c3f4612aa48def69059aad",
+        "message_id": "chatcmpl-87sb95K4EF2nuJRcTs43Tm9ntTeat-2",
+    },
+]
+
+
+@reset_core_stats_engine()
+@background_task()
+def test_get_llm_message_ids_when_nr_message_ids_not_set():
+    message_ids = get_llm_message_ids("request-id-1")
+    assert message_ids == []
+
+
+@reset_core_stats_engine()
+def test_get_llm_message_ids_outside_transaction():
+    message_ids = get_llm_message_ids("request-id-1")
+    assert message_ids == []
+
+
+@reset_core_stats_engine()
+@background_task()
+def test_get_llm_message_ids_multiple_async(loop, set_trace_info):
+    set_trace_info()
+    add_custom_attribute("conversation_id", "my-awesome-id")
+
+    async def _run():
+        res1 = await openai.ChatCompletion.acreate(
+            model="gpt-3.5-turbo", messages=_test_openai_chat_completion_messages_1, temperature=0.7, max_tokens=100
+        )
+        res2 = await openai.ChatCompletion.acreate(
+            model="gpt-3.5-turbo", messages=_test_openai_chat_completion_messages_2, temperature=0.7, max_tokens=100
+        )
+        return [res1, res2]
+
+    results = loop.run_until_complete(_run())
+
+    message_ids = [m for m in get_llm_message_ids(results[0].id)]
+    assert message_ids == expected_message_ids_1
+
+    message_ids = [m for m in get_llm_message_ids(results[1].id)]
+    assert message_ids == expected_message_ids_2
+
+    # Make sure we aren't causing a memory leak.
+    transaction = current_transaction()
+    assert not transaction._nr_message_ids
+
+
+@reset_core_stats_engine()
+@background_task()
+def test_get_llm_message_ids_multiple_async_no_conversation_id(loop, set_trace_info):
+    set_trace_info()
+
+    async def _run():
+        res1 = await openai.ChatCompletion.acreate(
+            model="gpt-3.5-turbo", messages=_test_openai_chat_completion_messages_1, temperature=0.7, max_tokens=100
+        )
+        res2 = await openai.ChatCompletion.acreate(
+            model="gpt-3.5-turbo", messages=_test_openai_chat_completion_messages_2, temperature=0.7, max_tokens=100
+        )
+        return [res1, res2]
+
+    results = loop.run_until_complete(_run())
+
+    message_ids = [m for m in get_llm_message_ids(results[0].id)]
+    assert message_ids == expected_message_ids_1_no_conversation_id
+
+    message_ids = [m for m in get_llm_message_ids(results[1].id)]
+    assert message_ids == expected_message_ids_2_no_conversation_id
+
+    # Make sure we aren't causing a memory leak.
+    transaction = current_transaction()
+    assert not transaction._nr_message_ids
+
+
+@reset_core_stats_engine()
+# Three chat completion messages and one chat completion summary for each create call (8 in total)
+# Three feedback events for the first create call
+@validate_ml_event_count(11)
+@background_task()
+def test_get_llm_message_ids_multiple_sync(set_trace_info):
+    set_trace_info()
+    add_custom_attribute("conversation_id", "my-awesome-id")
+
+    results = openai.ChatCompletion.create(
+        model="gpt-3.5-turbo", messages=_test_openai_chat_completion_messages_1, temperature=0.7, max_tokens=100
+    )
+    message_ids = [m for m in get_llm_message_ids(results.id)]
+    assert message_ids == expected_message_ids_1
+
+    for message_id in message_ids:
+        record_llm_feedback_event(
+            category="informative",
+            rating=1,
+            message_id=message_id.get("message_id"),
+            request_id=message_id.get("request_id"),
+            conversation_id=message_id.get("conversation_id"),
+        )
+
+    results = openai.ChatCompletion.create(
+        model="gpt-3.5-turbo", messages=_test_openai_chat_completion_messages_2, temperature=0.7, max_tokens=100
+    )
+    message_ids = [m for m in get_llm_message_ids(results.id)]
+    assert message_ids == expected_message_ids_2
+
+    # Make sure we aren't causing a memory leak.
+    transaction = current_transaction()
+    assert not transaction._nr_message_ids
+
+
+@reset_core_stats_engine()
+@validate_ml_event_count(11)
+@background_task()
+def test_get_llm_message_ids_multiple_sync_no_conversation_id(set_trace_info):
+    set_trace_info()
+
+    results = openai.ChatCompletion.create(
+        model="gpt-3.5-turbo", messages=_test_openai_chat_completion_messages_1, temperature=0.7, max_tokens=100
+    )
+    message_ids = [m for m in get_llm_message_ids(results.id)]
+    assert message_ids == expected_message_ids_1_no_conversation_id
+
+    for message_id in message_ids:
+        record_llm_feedback_event(
+            category="informative",
+            rating=1,
+            message_id=message_id.get("message_id"),
+            request_id=message_id.get("request_id"),
+            conversation_id=message_id.get("conversation_id"),
+        )
+
+    results = openai.ChatCompletion.create(
+        model="gpt-3.5-turbo", messages=_test_openai_chat_completion_messages_2, temperature=0.7, max_tokens=100
+    )
+    message_ids = [m for m in get_llm_message_ids(results.id)]
+    assert message_ids == expected_message_ids_2_no_conversation_id
+
+    # Make sure we aren't causing a memory leak.
+ transaction = current_transaction() + assert not transaction._nr_message_ids diff --git a/tests/testing_support/validators/validate_ml_event_payload.py b/tests/testing_support/validators/validate_ml_event_payload.py index 4d43cbb22e..9933b85f6d 100644 --- a/tests/testing_support/validators/validate_ml_event_payload.py +++ b/tests/testing_support/validators/validate_ml_event_payload.py @@ -41,23 +41,36 @@ def payload_to_ml_events(payload): else: message = payload - resource_logs = message.get("resource_logs") - assert len(resource_logs) == 1 - resource_logs = resource_logs[0] - resource = resource_logs.get("resource") - assert resource and resource.get("attributes")[0] == { - "key": "instrumentation.provider", - "value": {"string_value": "newrelic-opentelemetry-python-ml"}, - } - scope_logs = resource_logs.get("scope_logs") - assert len(scope_logs) == 1 - scope_logs = scope_logs[0] - - scope = scope_logs.get("scope") - assert scope is None - logs = scope_logs.get("log_records") - - return logs + inference_logs = [] + apm_logs = [] + resource_log_records = message.get("resource_logs") + for resource_logs in resource_log_records: + resource = resource_logs.get("resource") + assert resource + resource_attrs = resource.get("attributes") + assert { + "key": "instrumentation.provider", + "value": {"string_value": "newrelic-opentelemetry-python-ml"}, + } in resource_attrs + scope_logs = resource_logs.get("scope_logs") + assert len(scope_logs) == 1 + scope_logs = scope_logs[0] + + scope = scope_logs.get("scope") + assert scope is None + logs = scope_logs.get("log_records") + event_name = get_event_name(logs) + if event_name == "InferenceEvent": + inference_logs = logs + else: + # Make sure apm entity attrs are present on the resource. 
+        expected_apm_keys = ("entity.type", "entity.name", "entity.guid", "hostname", "instrumentation.provider")
+        assert all(attr["key"] in expected_apm_keys for attr in resource_attrs)
+        assert all(attr["value"] not in ("", None) for attr in resource_attrs)
+
+        apm_logs = logs
+
+    return inference_logs, apm_logs
 
 
 def validate_ml_event_payload(ml_events=None):
@@ -86,19 +99,34 @@ def _bind_params(method, payload=(), *args, **kwargs):
         assert recorded_ml_events
         decoded_payloads = [payload_to_ml_events(payload) for payload in recorded_ml_events]
-        all_logs = []
-        for sent_logs in decoded_payloads:
-            for data_point in sent_logs:
-                for key in ("time_unix_nano",):
-                    assert key in data_point, "Invalid log format. Missing key: %s" % key
+        decoded_inference_payloads = [payload[0] for payload in decoded_payloads]
+        decoded_apm_payloads = [payload[1] for payload in decoded_payloads]
+        all_apm_logs = normalize_logs(decoded_apm_payloads)
+        all_inference_logs = normalize_logs(decoded_inference_payloads)
+
+        for expected_event in ml_events.get("inference", []):
+            assert expected_event in all_inference_logs, "%s Not Found. Got: %s" % (expected_event, all_inference_logs)
+        for expected_event in ml_events.get("apm", []):
+            assert expected_event in all_apm_logs, "%s Not Found. Got: %s" % (expected_event, all_apm_logs)
+        return val
+
+    return _validate_wrapper
+
+
+def normalize_logs(decoded_payloads):
+    all_logs = []
+    for sent_logs in decoded_payloads:
+        for data_point in sent_logs:
+            for key in ("time_unix_nano",):
+                assert key in data_point, "Invalid log format. Missing key: %s" % key
             all_logs.append(
                 {attr["key"]: attribute_to_value(attr["value"]) for attr in (data_point.get("attributes") or [])}
             )
+    return all_logs
-        for expected_event in ml_events:
-            assert expected_event in all_logs, "%s Not Found. Got: %s" % (expected_event, all_logs)
-        return val
-
-    return _validate_wrapper
+def get_event_name(logs):
+    for attr in logs[0]["attributes"]:
+        if attr["key"] == "event.name":
+            return attr["value"]["string_value"]
diff --git a/tests/testing_support/validators/validate_ml_events.py b/tests/testing_support/validators/validate_ml_events.py
index 251e8dbe79..275a9b2e1b 100644
--- a/tests/testing_support/validators/validate_ml_events.py
+++ b/tests/testing_support/validators/validate_ml_events.py
@@ -24,7 +24,6 @@ def validate_ml_events(events):
 
     @function_wrapper
     def _validate_wrapper(wrapped, instance, args, kwargs):
-
         record_called = []
         recorded_events = []
 
@@ -55,7 +54,7 @@ def _validate_ml_events(wrapped, instance, args, kwargs):
             for captured in found_events:
                 if _check_event_attributes(expected, captured, mismatches):
                     matching_ml_events += 1
-            assert matching_ml_events == 1, _event_details(matching_ml_events, events, mismatches)
+            assert matching_ml_events == 1, _event_details(matching_ml_events, found_events, mismatches)
 
         return val
 
diff --git a/tox.ini b/tox.ini
index bdcaa745ab..25f602d455 100644
--- a/tox.ini
+++ b/tox.ini
@@ -97,7 +97,6 @@ envlist =
     redis-datastore_redis-{py37,py38,py39,py310,py311,pypy38}-redis{0400,latest},
     rediscluster-datastore_rediscluster-{py37,py311,pypy38}-redis{latest},
     python-datastore_sqlite-{py27,py37,py38,py39,py310,py311,pypy27,pypy38},
-    python-external_boto3-{py27,py37,py38,py39,py310,py311}-boto01,
     python-external_botocore-{py37,py38,py39,py310,py311}-botocorelatest,
     python-external_botocore-{py311}-botocore128,
     python-external_botocore-py310-botocore0125,
@@ -140,6 +139,7 @@ envlist =
     python-framework_starlette-{py37,py38}-starlette{002001},
     python-framework_starlette-{py37,py38,py39,py310,py311,pypy38}-starlettelatest,
     python-framework_strawberry-{py37,py38,py39,py310,py311}-strawberrylatest,
+    python-mlmodel_openai-{py37,py38,py39,py310,py311,pypy38},
     python-logger_logging-{py27,py37,py38,py39,py310,py311,pypy27,pypy38},
     python-logger_loguru-{py37,py38,py39,py310,py311,pypy38}-logurulatest,
     python-logger_loguru-py39-loguru{06,05},
@@ -251,15 +251,13 @@ deps =
     datastore_redis-redislatest: redis
     datastore_rediscluster-redislatest: redis
     datastore_redis-redis0400: redis<4.1
-    external_boto3-boto01: boto3<2.0
-    external_boto3-boto01: moto<2.0
-    external_boto3-py27: rsa<4.7.1
     external_botocore-botocorelatest: botocore
+    external_botocore-botocorelatest: boto3
     external_botocore-botocore128: botocore<1.29
     external_botocore-botocore0125: botocore<1.26
-    external_botocore-{py37,py38,py39,py310,py311}: moto[awslambda,ec2,iam]<3.0
+    external_botocore-{py37,py38,py39,py310,py311}: moto[awslambda,ec2,iam,sqs]
     external_botocore-py27: rsa<4.7.1
-    external_botocore-py27: moto[awslambda,ec2,iam]<2.0
+    external_botocore-py27: moto[awslambda,ec2,iam,sqs]<2.0
     external_feedparser-feedparser05: feedparser<6
     external_feedparser-feedparser06: feedparser<7
     external_httplib2: httplib2<1.0
@@ -343,6 +341,8 @@ deps =
     framework_tornado: pycurl
     framework_tornado-tornadolatest: tornado
     framework_tornado-tornadomaster: https://github.com/tornadoweb/tornado/archive/master.zip
+    mlmodel_openai: openai[datalib]<1.0
+    mlmodel_openai: protobuf
     logger_loguru-logurulatest: loguru
     logger_loguru-loguru06: loguru<0.7
     logger_loguru-loguru05: loguru<0.6
@@ -437,7 +437,6 @@ changedir =
     datastore_redis: tests/datastore_redis
     datastore_rediscluster: tests/datastore_rediscluster
     datastore_sqlite: tests/datastore_sqlite
-    external_boto3: tests/external_boto3
     external_botocore: tests/external_botocore
     external_feedparser: tests/external_feedparser
     external_http: tests/external_http
@@ -462,6 +461,7 @@ changedir =
     framework_starlette: tests/framework_starlette
     framework_strawberry: tests/framework_strawberry
     framework_tornado: tests/framework_tornado
+    mlmodel_openai: tests/mlmodel_openai
     logger_logging: tests/logger_logging
     logger_loguru: tests/logger_loguru
     logger_structlog: tests/logger_structlog

From
ca1d093353ebeb8222c2627f42e918fa74a766a5 Mon Sep 17 00:00:00 2001 From: Hannah Stepanek Date: Tue, 14 Nov 2023 15:27:37 -0800 Subject: [PATCH 002/199] Merge improved-record-ml-event into develop-ai-limited-preview (#976) * Add truncation for ML events. (#943) * Add 4096 char truncation for ML events. * Add max attr check. * Fixup. * Fix character length ml event test. * Ignore test_ml_events.py for Py2. * Cleanup custom event if checks. * Add import statement. --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Attach ml_event to APM entity by default (#940) * Attach non InferenceEvents to APM entity * Validate both resource payloads * Add tests for non-inference events * Add OpenAI sync embedding instrumentation (#938) * Add sync instrumentation for OpenAI embeddings. * Remove comments. * Clean up embedding event dictionary. * Update response_time to duration. * Linting fixes. * [Mega-Linter] Apply linters fixes * Trigger tests --------- Co-authored-by: umaannamalai Co-authored-by: Hannah Stepanek * Fixup: test names --------- Co-authored-by: Uma Annamalai Co-authored-by: umaannamalai Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add truncation support for ML events recorded outside txns. (#949) * Add ml tests for outside transaction. * Update validator. * Add ML flag to application code path for record_ml_event. 
* Add NEW_RELIC_ML_INSIGHTS_EVENTS_ENABLED env var * Fix botocore tests (#973) * Bedrock Testing Infrastructure (#937) * Add AWS Bedrock testing infrastructure * Cache Package Version Lookups (#946) * Cache _get_package_version * Add Python 2.7 support to get_package_version caching * [Mega-Linter] Apply linters fixes * Bump tests --------- Co-authored-by: SlavaSkvortsov <29122694+SlavaSkvortsov@users.noreply.github.com> Co-authored-by: TimPansino * Fix Redis Generator Methods (#947) * Fix scan_iter for redis * Replace generator methods * Update instance info instrumentation * Remove mistake from uninstrumented methods * Add skip condition to asyncio generator tests * Add skip condition to asyncio generator tests --------- Co-authored-by: Lalleh Rafeei Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Automatic RPM System Updates (#948) * Checkout old action * Adding RPM action * Add dry run * Incorporating action into workflow * Wire secret into custom action * Enable action * Correct action name * Fix syntax * Fix quoting issues * Drop pre-verification. Does not work on python * Fix merge artifact * Remove OpenAI references --------- Co-authored-by: Uma Annamalai Co-authored-by: SlavaSkvortsov <29122694+SlavaSkvortsov@users.noreply.github.com> Co-authored-by: TimPansino Co-authored-by: Lalleh Rafeei Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Bedrock Sync Chat Completion Instrumentation (#953) * Add AWS Bedrock testing infrastructure * Squashed commit of the following: commit 2834663794c649124052e510c1c9557a830c060a Author: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> Date: Mon Oct 9 17:42:05 2023 -0700 OpenAI Mock Backend (#929) * Add mock external openai server * Add mocked OpenAI server fixtures * Set up recorded responses. * Clean mock server to depend on http server * Linting * Pin flask version for flask restx tests. (#931) * Ignore new redis methods. 
(#932) Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> * Remove approved paths * Update CI Image (#930) * Update available python versions in CI * Update makefile with overrides * Fix default branch detection for arm builds --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add mocking for embedding endpoint * [Mega-Linter] Apply linters fixes * Add ratelimit headers * [Mega-Linter] Apply linters fixes * Only get package version once (#928) * Only get package version once * Add disconnect method * Add disconnect method --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add datalib dependency for embedding testing. * Add OpenAI Test Infrastructure (#926) * Add openai to tox * Add OpenAI test files. * Add test functions. * [Mega-Linter] Apply linters fixes --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: mergify[bot] * Add mock external openai server * Add mocked OpenAI server fixtures * Set up recorded responses. * Clean mock server to depend on http server * Linting * Remove approved paths * Add mocking for embedding endpoint * [Mega-Linter] Apply linters fixes * Add ratelimit headers * [Mega-Linter] Apply linters fixes * Add datalib dependency for embedding testing. --------- Co-authored-by: Uma Annamalai Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: TimPansino Co-authored-by: Hannah Stepanek Co-authored-by: mergify[bot] commit db63d4598c94048986c0e00ebb2cd8827100b54c Author: Uma Annamalai Date: Mon Oct 2 15:31:38 2023 -0700 Add OpenAI Test Infrastructure (#926) * Add openai to tox * Add OpenAI test files. * Add test functions. 
* [Mega-Linter] Apply linters fixes --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: mergify[bot] * Squashed commit of the following: commit 182c7a8c8a91e2d0f234f7ed7d4a14a2422c8342 Author: Uma Annamalai Date: Fri Oct 13 10:12:55 2023 -0700 Add request/ response IDs. commit f6d13f822c22d2039ec32be86b2c54f9dc3de1c9 Author: Uma Annamalai Date: Thu Oct 12 13:23:39 2023 -0700 Test cleanup. commit d0576631d009e481bd5887a3243aac99b097d823 Author: Uma Annamalai Date: Tue Oct 10 10:23:00 2023 -0700 Remove commented code. commit dd29433e719482babbe5c724e7330b1f6324abd7 Author: Uma Annamalai Date: Tue Oct 10 10:19:01 2023 -0700 Add openai sync instrumentation. commit 2834663794c649124052e510c1c9557a830c060a Author: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> Date: Mon Oct 9 17:42:05 2023 -0700 OpenAI Mock Backend (#929) commit db63d4598c94048986c0e00ebb2cd8827100b54c Author: Uma Annamalai Date: Mon Oct 2 15:31:38 2023 -0700 Add OpenAI Test Infrastructure (#926) * Cache Package Version Lookups (#946) * Cache _get_package_version * Add Python 2.7 support to get_package_version caching * [Mega-Linter] Apply linters fixes * Bump tests --------- Co-authored-by: SlavaSkvortsov <29122694+SlavaSkvortsov@users.noreply.github.com> Co-authored-by: TimPansino * Fix Redis Generator Methods (#947) * Fix scan_iter for redis * Replace generator methods * Update instance info instrumentation * Remove mistake from uninstrumented methods * Add skip condition to asyncio generator tests * Add skip condition to asyncio generator tests --------- Co-authored-by: Lalleh Rafeei Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * TEMP * Automatic RPM System Updates (#948) * Checkout old action * Adding RPM action * Add dry run * Incorporating action into workflow * Wire secret into custom action * Enable action * Correct action name * Fix syntax * Fix quoting issues * Drop pre-verification.
Does not work on python * Fix merge artifact * Bedrock titan extraction nearly complete * Cleaning up titan bedrock implementation * TEMP * Tests for bedrock passing Co-authored-by: Lalleh Rafeei * Cleaned up titan testing Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek * Parametrized bedrock testing * Add support for AI21-J2 models * Change to dynamic no conversation id events * Drop all openai refs * [Mega-Linter] Apply linters fixes * Adding response_id and response_model * Drop python 3.7 tests for Hypercorn (#954) * Apply suggestions from code review * Remove unused import --------- Co-authored-by: Uma Annamalai Co-authored-by: SlavaSkvortsov <29122694+SlavaSkvortsov@users.noreply.github.com> Co-authored-by: TimPansino Co-authored-by: Lalleh Rafeei Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> * Feature bedrock cohere instrumentation (#955) * Add AWS Bedrock testing infrastructure * TEMP * Bedrock titan extraction nearly complete * Cleaning up titan bedrock implementation * TEMP * Tests for bedrock passing Co-authored-by: Lalleh Rafeei * Cleaned up titan testing Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek * Parametrized bedrock testing * Add support for AI21-J2 models * Change to dynamic no conversation id events * Add cohere model * Remove openai instrumentation from this branch * Remove OpenAI from newrelic/config.py --------- Co-authored-by: Uma Annamalai Co-authored-by: Tim Pansino Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek * AWS Bedrock Embedding Instrumentation (#957) * AWS Bedrock embedding instrumentation * Correct symbol name * Add support for bedrock claude (#960) Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> * Combine Botocore Tests (#959) * Initial file migration * Enable DT on all span tests * Add pytest skip for older botocore versions * Fixup: app name merge conflict --------- Co-authored-by: Hannah Stepanek * Initial bedrock error tracing commit * Add status code to mock bedrock server * Updating error response recording logic * Work on bedrock error tracing * Chat completion error tracing * Adding embedding error tracing * Delete comment * Update moto * Fix botocore tests & re-structure * [Mega-Linter] Apply linters fixes --------- Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> Co-authored-by: Uma Annamalai Co-authored-by: SlavaSkvortsov <29122694+SlavaSkvortsov@users.noreply.github.com> Co-authored-by: TimPansino Co-authored-by: Lalleh Rafeei Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> Co-authored-by: Tim
Pansino * Package Version Performance Regression (#970) * Fix package version performance regression * Update tests/agent_unittests/test_package_version_utils.py * Update tests/agent_unittests/test_package_version_utils.py * Update tests/agent_unittests/test_package_version_utils.py * Skip test in python 2 --------- Co-authored-by: Hannah Stepanek * Add new config setting max_attribute_value * Use new config setting * Add env var * Convert " " => * Use MAX_ATTRIBUTE_VALUE & cap at 4095 * Fixup: use min not max * Add max_attribute_value setting (#975) * Drop python 3.7 tests for Hypercorn (#954) * Fix pyenv installation for devcontainer (#936) Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Remove duplicate kafka import hook (#956) Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Handle 0.32.0.post1 version in tests (#963) * Fix botocore tests (#973) * Bedrock Testing Infrastructure (#937) * Add AWS Bedrock testing infrastructure * Cache Package Version Lookups (#946) * Cache _get_package_version * Add Python 2.7 support to get_package_version caching * [Mega-Linter] Apply linters fixes * Bump tests --------- Co-authored-by: SlavaSkvortsov <29122694+SlavaSkvortsov@users.noreply.github.com> Co-authored-by: TimPansino * Fix Redis Generator Methods (#947) * Fix scan_iter for redis * Replace generator methods * Update instance info instrumentation * Remove mistake from uninstrumented methods * Add skip condition to asyncio generator tests * Add skip condition to asyncio generator tests --------- Co-authored-by: Lalleh Rafeei Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Automatic RPM System Updates (#948) * Checkout old action * Adding RPM action * Add dry run * Incorporating action into workflow * Wire secret into custom action * Enable action * Correct action name * Fix syntax * Fix quoting issues * Drop pre-verification. 
Does not work on python * Fix merge artifact * Remove OpenAI references --------- Co-authored-by: Uma Annamalai Co-authored-by: SlavaSkvortsov <29122694+SlavaSkvortsov@users.noreply.github.com> Co-authored-by: TimPansino Co-authored-by: Lalleh Rafeei Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Bedrock Sync Chat Completion Instrumentation (#953) * Add AWS Bedrock testing infrastructure * Squashed commit of the following: commit 2834663794c649124052e510c1c9557a830c060a Author: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> Date: Mon Oct 9 17:42:05 2023 -0700 OpenAI Mock Backend (#929) * Add mock external openai server * Add mocked OpenAI server fixtures * Set up recorded responses. * Clean mock server to depend on http server * Linting * Pin flask version for flask restx tests. (#931) * Ignore new redis methods. (#932) Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> * Remove approved paths * Update CI Image (#930) * Update available python versions in CI * Update makefile with overrides * Fix default branch detection for arm builds --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add mocking for embedding endpoint * [Mega-Linter] Apply linters fixes * Add ratelimit headers * [Mega-Linter] Apply linters fixes * Only get package version once (#928) * Only get package version once * Add disconnect method * Add disconnect method --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add datalib dependency for embedding testing. * Add OpenAI Test Infrastructure (#926) * Add openai to tox * Add OpenAI test files. * Add test functions. * [Mega-Linter] Apply linters fixes --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: mergify[bot] * Add mock external openai server * Add mocked OpenAI server fixtures * Set up recorded responses. 
* Clean mock server to depend on http server * Linting * Remove approved paths * Add mocking for embedding endpoint * [Mega-Linter] Apply linters fixes * Add ratelimit headers * [Mega-Linter] Apply linters fixes * Add datalib dependency for embedding testing. --------- Co-authored-by: Uma Annamalai Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: TimPansino Co-authored-by: Hannah Stepanek Co-authored-by: mergify[bot] commit db63d4598c94048986c0e00ebb2cd8827100b54c Author: Uma Annamalai Date: Mon Oct 2 15:31:38 2023 -0700 Add OpenAI Test Infrastructure (#926) * Add openai to tox * Add OpenAI test files. * Add test functions. * [Mega-Linter] Apply linters fixes --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: mergify[bot] * Squashed commit of the following: commit 182c7a8c8a91e2d0f234f7ed7d4a14a2422c8342 Author: Uma Annamalai Date: Fri Oct 13 10:12:55 2023 -0700 Add request/ response IDs. commit f6d13f822c22d2039ec32be86b2c54f9dc3de1c9 Author: Uma Annamalai Date: Thu Oct 12 13:23:39 2023 -0700 Test cleanup. commit d0576631d009e481bd5887a3243aac99b097d823 Author: Uma Annamalai Date: Tue Oct 10 10:23:00 2023 -0700 Remove commented code. commit dd29433e719482babbe5c724e7330b1f6324abd7 Author: Uma Annamalai Date: Tue Oct 10 10:19:01 2023 -0700 Add openai sync instrumentation. commit 2834663794c649124052e510c1c9557a830c060a Author: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> Date: Mon Oct 9 17:42:05 2023 -0700 OpenAI Mock Backend (#929) * Add mock external openai server * Add mocked OpenAI server fixtures * Set up recorded responses. * Clean mock server to depend on http server * Linting * Pin flask version for flask restx tests. (#931) * Ignore new redis methods. 
(#932) Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> * Remove approved paths * Update CI Image (#930) * Update available python versions in CI * Update makefile with overrides * Fix default branch detection for arm builds --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add mocking for embedding endpoint * [Mega-Linter] Apply linters fixes * Add ratelimit headers * [Mega-Linter] Apply linters fixes * Only get package version once (#928) * Only get package version once * Add disconnect method * Add disconnect method --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add datalib dependency for embedding testing. * Add OpenAI Test Infrastructure (#926) * Add openai to tox * Add OpenAI test files. * Add test functions. * [Mega-Linter] Apply linters fixes --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: mergify[bot] * Add mock external openai server * Add mocked OpenAI server fixtures * Set up recorded responses. * Clean mock server to depend on http server * Linting * Remove approved paths * Add mocking for embedding endpoint * [Mega-Linter] Apply linters fixes * Add ratelimit headers * [Mega-Linter] Apply linters fixes * Add datalib dependency for embedding testing. --------- Co-authored-by: Uma Annamalai Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: TimPansino Co-authored-by: Hannah Stepanek Co-authored-by: mergify[bot] commit db63d4598c94048986c0e00ebb2cd8827100b54c Author: Uma Annamalai Date: Mon Oct 2 15:31:38 2023 -0700 Add OpenAI Test Infrastructure (#926) * Add openai to tox * Add OpenAI test files. * Add test functions. 
* [Mega-Linter] Apply linters fixes --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: mergify[bot] * Cache Package Version Lookups (#946) * Cache _get_package_version * Add Python 2.7 support to get_package_version caching * [Mega-Linter] Apply linters fixes * Bump tests --------- Co-authored-by: SlavaSkvortsov <29122694+SlavaSkvortsov@users.noreply.github.com> Co-authored-by: TimPansino * Fix Redis Generator Methods (#947) * Fix scan_iter for redis * Replace generator methods * Update instance info instrumentation * Remove mistake from uninstrumented methods * Add skip condition to asyncio generator tests * Add skip condition to asyncio generator tests --------- Co-authored-by: Lalleh Rafeei Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * TEMP * Automatic RPM System Updates (#948) * Checkout old action * Adding RPM action * Add dry run * Incorporating action into workflow * Wire secret into custom action * Enable action * Correct action name * Fix syntax * Fix quoting issues * Drop pre-verification. 
Does not work on python * Fix merge artifact * Bedrock titan extraction nearly complete * Cleaning up titan bedrock implementation * TEMP * Tests for bedrock passing Co-authored-by: Lalleh Rafeei * Cleaned up titan testing Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek * Parametrized bedrock testing * Add support for AI21-J2 models * Change to dynamic no conversation id events * Drop all openai refs * [Mega-Linter] Apply linters fixes * Adding response_id and response_model * Drop python 3.7 tests for Hypercorn (#954) * Apply suggestions from code review * Remove unused import --------- Co-authored-by: Uma Annamalai Co-authored-by: SlavaSkvortsov <29122694+SlavaSkvortsov@users.noreply.github.com> Co-authored-by: TimPansino Co-authored-by: Lalleh Rafeei Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> * Feature bedrock cohere instrumentation (#955) * Add AWS Bedrock testing infrastructure * Squashed commit of the following: commit 2834663794c649124052e510c1c9557a830c060a Author: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> Date: Mon Oct 9 17:42:05 2023 -0700 OpenAI Mock Backend (#929) * Add mock external openai server * Add mocked OpenAI server fixtures * Set up recorded responses. * Clean mock server to depend on http server * Linting * Pin flask version for flask restx tests. (#931) * Ignore new redis methods. 
(#932) Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> * Remove approved paths * Update CI Image (#930) * Update available python versions in CI * Update makefile with overrides * Fix default branch detection for arm builds --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add mocking for embedding endpoint * [Mega-Linter] Apply linters fixes * Add ratelimit headers * [Mega-Linter] Apply linters fixes * Only get package version once (#928) * Only get package version once * Add disconnect method * Add disconnect method --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add datalib dependency for embedding testing. * Add OpenAI Test Infrastructure (#926) * Add openai to tox * Add OpenAI test files. * Add test functions. * [Mega-Linter] Apply linters fixes --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: mergify[bot] * Add mock external openai server * Add mocked OpenAI server fixtures * Set up recorded responses. * Clean mock server to depend on http server * Linting * Remove approved paths * Add mocking for embedding endpoint * [Mega-Linter] Apply linters fixes * Add ratelimit headers * [Mega-Linter] Apply linters fixes * Add datalib dependency for embedding testing. --------- Co-authored-by: Uma Annamalai Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: TimPansino Co-authored-by: Hannah Stepanek Co-authored-by: mergify[bot] commit db63d4598c94048986c0e00ebb2cd8827100b54c Author: Uma Annamalai Date: Mon Oct 2 15:31:38 2023 -0700 Add OpenAI Test Infrastructure (#926) * Add openai to tox * Add OpenAI test files. * Add test functions. 
* [Mega-Linter] Apply linters fixes --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: mergify[bot] * Squashed commit of the following: commit 182c7a8c8a91e2d0f234f7ed7d4a14a2422c8342 Author: Uma Annamalai Date: Fri Oct 13 10:12:55 2023 -0700 Add request/ response IDs. commit f6d13f822c22d2039ec32be86b2c54f9dc3de1c9 Author: Uma Annamalai Date: Thu Oct 12 13:23:39 2023 -0700 Test cleanup. commit d0576631d009e481bd5887a3243aac99b097d823 Author: Uma Annamalai Date: Tue Oct 10 10:23:00 2023 -0700 Remove commented code. commit dd29433e719482babbe5c724e7330b1f6324abd7 Author: Uma Annamalai Date: Tue Oct 10 10:19:01 2023 -0700 Add openai sync instrumentation. commit 2834663794c649124052e510c1c9557a830c060a Author: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> Date: Mon Oct 9 17:42:05 2023 -0700 OpenAI Mock Backend (#929) * Add mock external openai server * Add mocked OpenAI server fixtures * Set up recorded responses. * Clean mock server to depend on http server * Linting * Pin flask version for flask restx tests. (#931) * Ignore new redis methods. (#932) Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> * Remove approved paths * Update CI Image (#930) * Update available python versions in CI * Update makefile with overrides * Fix default branch detection for arm builds --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add mocking for embedding endpoint * [Mega-Linter] Apply linters fixes * Add ratelimit headers * [Mega-Linter] Apply linters fixes * Only get package version once (#928) * Only get package version once * Add disconnect method * Add disconnect method --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add datalib dependency for embedding testing. * Add OpenAI Test Infrastructure (#926) * Add openai to tox * Add OpenAI test files. * Add test functions. 
* [Mega-Linter] Apply linters fixes --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: mergify[bot] * Add mock external openai server * Add mocked OpenAI server fixtures * Set up recorded responses. * Clean mock server to depend on http server * Linting * Remove approved paths * Add mocking for embedding endpoint * [Mega-Linter] Apply linters fixes * Add ratelimit headers * [Mega-Linter] Apply linters fixes * Add datalib dependency for embedding testing. --------- Co-authored-by: Uma Annamalai Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: TimPansino Co-authored-by: Hannah Stepanek Co-authored-by: mergify[bot] commit db63d4598c94048986c0e00ebb2cd8827100b54c Author: Uma Annamalai Date: Mon Oct 2 15:31:38 2023 -0700 Add OpenAI Test Infrastructure (#926) * Add openai to tox * Add OpenAI test files. * Add test functions. 
* [Mega-Linter] Apply linters fixes --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: mergify[bot] * TEMP * Bedrock titan extraction nearly complete * Cleaning up titan bedrock implementation * TEMP * Tests for bedrock passing Co-authored-by: Lalleh Rafeei * Cleaned up titan testing Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek * Parametrized bedrock testing * Add support for AI21-J2 models * Change to dynamic no conversation id events * Add cohere model * Remove openai instrumentation from this branch * Remove OpenAI from newrelic/config.py --------- Co-authored-by: Uma Annamalai Co-authored-by: Tim Pansino Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek * AWS Bedrock Embedding Instrumentation (#957) * AWS Bedrock embedding instrumentation * Correct symbol name * Add support for bedrock claude (#960) Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> * Combine Botocore Tests (#959) * Initial file migration * Enable DT on all span tests * Add pytest skip for older botocore versions * Fixup: app name merge conflict --------- Co-authored-by: Hannah Stepanek * Initial bedrock error tracing commit * Add status code to mock bedrock server * Update error response recording logic * Work on bedrock error tracing * Chat completion error tracing * Add embedding error tracing * Delete comment * Update moto * Fix botocore tests & re-structure * [Mega-Linter] Apply linters fixes --------- Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> Co-authored-by: Uma Annamalai Co-authored-by: SlavaSkvortsov <29122694+SlavaSkvortsov@users.noreply.github.com> Co-authored-by: TimPansino Co-authored-by: Lalleh Rafeei Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> Co-authored-by: Tim
Pansino * Package Version Performance Regression (#970) * Fix package version performance regression * Update tests/agent_unittests/test_package_version_utils.py * Update tests/agent_unittests/test_package_version_utils.py * Update tests/agent_unittests/test_package_version_utils.py * Skip test in python 2 --------- Co-authored-by: Hannah Stepanek * Add new config setting max_attribute_value * Use new config setting * Add env var * Convert " " => * Use MAX_ATTRIBUTE_VALUE & cap at 4095 * Fixup: use min not max --------- Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: Uma Annamalai Co-authored-by: SlavaSkvortsov <29122694+SlavaSkvortsov@users.noreply.github.com> Co-authored-by: TimPansino Co-authored-by: Lalleh Rafeei Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek Co-authored-by: Tim Pansino * Fixup " " -> * [Mega-Linter] Apply linters fixes --------- Co-authored-by: Uma Annamalai Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: umaannamalai Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> Co-authored-by: SlavaSkvortsov <29122694+SlavaSkvortsov@users.noreply.github.com> Co-authored-by: TimPansino Co-authored-by: Lalleh Rafeei Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> Co-authored-by: Tim Pansino --- newrelic/api/transaction.py | 4 +- newrelic/common/package_version_utils.py | 32 +++-- newrelic/config.py | 2 + newrelic/core/application.py | 8 +- newrelic/core/config.py | 15 +++ newrelic/core/custom_event.py | 60 +++++---- tests/agent_features/test_custom_events.py | 127 +++++++++++++----- .../test_package_version_utils.py | 41 ++++-- 8 files changed, 199 insertions(+), 90 deletions(-) diff 
--git a/newrelic/api/transaction.py b/newrelic/api/transaction.py index d6e960d5aa..fea2d28653 100644 --- a/newrelic/api/transaction.py +++ b/newrelic/api/transaction.py @@ -1640,7 +1640,7 @@ def record_custom_event(self, event_type, params): if not settings.custom_insights_events.enabled: return - event = create_custom_event(event_type, params) + event = create_custom_event(event_type, params, settings=settings) if event: self._custom_events.add(event, priority=self.priority) @@ -1653,7 +1653,7 @@ def record_ml_event(self, event_type, params): if not settings.ml_insights_events.enabled: return - event = create_custom_event(event_type, params, is_ml_event=True) + event = create_custom_event(event_type, params, settings=settings, is_ml_event=True) if event: self._ml_events.add(event, priority=self.priority) diff --git a/newrelic/common/package_version_utils.py b/newrelic/common/package_version_utils.py index 68320b897f..edefc4c0aa 100644 --- a/newrelic/common/package_version_utils.py +++ b/newrelic/common/package_version_utils.py @@ -13,6 +13,7 @@ # limitations under the License. import sys +import warnings try: from functools import cache as _cache_package_versions @@ -110,6 +111,23 @@ def _get_package_version(name): module = sys.modules.get(name, None) version = None + with warnings.catch_warnings(record=True): + for attr in VERSION_ATTRS: + try: + version = getattr(module, attr, None) + + # In certain cases like importlib_metadata.version, version is a callable + # function. + if callable(version): + continue + + # Cast any version specified as a list into a tuple. + version = tuple(version) if isinstance(version, list) else version + if version not in NULL_VERSIONS: + return version + except Exception: + pass + # importlib was introduced into the standard library starting in Python3.8. 
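The `package_version_utils.py` hunk above moves the attribute probe ahead of the `importlib.metadata` lookup and wraps it in `warnings.catch_warnings(record=True)` so that packages which emit a `DeprecationWarning` on version-attribute access do not spam logs. A minimal, self-contained sketch of that pattern (the function name `probe_version` and the standalone constants are illustrative, not the agent's real API):

```python
import warnings

VERSION_ATTRS = ("__version__", "version", "version_tuple")
NULL_VERSIONS = frozenset((None, "", "0", "0.0"))


def probe_version(module):
    # Record (and thereby swallow) any warnings raised while probing the
    # module's well-known version attributes.
    with warnings.catch_warnings(record=True):
        for attr in VERSION_ATTRS:
            try:
                version = getattr(module, attr, None)
                # Some packages expose a callable here (e.g. importlib_metadata.version).
                if callable(version):
                    continue
                # Normalize list-style versions such as [3, 1, "0b2"] into tuples.
                version = tuple(version) if isinstance(version, list) else version
                if version not in NULL_VERSIONS:
                    return version
            except Exception:
                pass
    return None
```

Probing attributes first is cheaper than an `importlib.metadata` distribution lookup, which is why the diff reorders the two code paths.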
if "importlib" in sys.modules and hasattr(sys.modules["importlib"], "metadata"): try: @@ -126,20 +144,6 @@ def _get_package_version(name): except Exception: pass - for attr in VERSION_ATTRS: - try: - version = getattr(module, attr, None) - # In certain cases like importlib_metadata.version, version is a callable - # function. - if callable(version): - continue - # Cast any version specified as a list into a tuple. - version = tuple(version) if isinstance(version, list) else version - if version not in NULL_VERSIONS: - return version - except Exception: - pass - if "pkg_resources" in sys.modules: try: version = sys.modules["pkg_resources"].get_distribution(name).version diff --git a/newrelic/config.py b/newrelic/config.py index 6fe19705f2..1725c4eedb 100644 --- a/newrelic/config.py +++ b/newrelic/config.py @@ -45,6 +45,7 @@ from newrelic.common.log_file import initialize_logging from newrelic.common.object_names import expand_builtin_exception_name from newrelic.core import trace_cache +from newrelic.core.attribute import MAX_ATTRIBUTE_LENGTH from newrelic.core.config import ( Settings, apply_config_setting, @@ -443,6 +444,7 @@ def _process_configuration(section): ) _process_setting(section, "custom_insights_events.enabled", "getboolean", None) _process_setting(section, "custom_insights_events.max_samples_stored", "getint", None) + _process_setting(section, "custom_insights_events.max_attribute_value", "getint", MAX_ATTRIBUTE_LENGTH) _process_setting(section, "ml_insights_events.enabled", "getboolean", None) _process_setting(section, "distributed_tracing.enabled", "getboolean", None) _process_setting(section, "distributed_tracing.exclude_newrelic_header", "getboolean", None) diff --git a/newrelic/core/application.py b/newrelic/core/application.py index c681bc3f01..e1ada60aac 100644 --- a/newrelic/core/application.py +++ b/newrelic/core/application.py @@ -916,7 +916,7 @@ def record_custom_event(self, event_type, params): if settings is None or not 
settings.custom_insights_events.enabled: return - event = create_custom_event(event_type, params) + event = create_custom_event(event_type, params, settings=settings) if event: with self._stats_custom_lock: @@ -932,7 +932,7 @@ def record_ml_event(self, event_type, params): if settings is None or not settings.ml_insights_events.enabled: return - event = create_custom_event(event_type, params, is_ml_event=True) + event = create_custom_event(event_type, params, settings=settings, is_ml_event=True) if event: with self._stats_custom_lock: @@ -1506,7 +1506,9 @@ def harvest(self, shutdown=False, flexible=False): # Send metrics self._active_session.send_metric_data(self._period_start, period_end, metric_data) if dimensional_metric_data: - self._active_session.send_dimensional_metric_data(self._period_start, period_end, dimensional_metric_data) + self._active_session.send_dimensional_metric_data( + self._period_start, period_end, dimensional_metric_data + ) _logger.debug("Done sending data for harvest of %r.", self._app_name) diff --git a/newrelic/core/config.py b/newrelic/core/config.py index 483e23df80..27eb085b13 100644 --- a/newrelic/core/config.py +++ b/newrelic/core/config.py @@ -31,6 +31,7 @@ import newrelic.packages.six as six from newrelic.common.object_names import parse_exc_info +from newrelic.core.attribute import MAX_ATTRIBUTE_LENGTH from newrelic.core.attribute_filter import AttributeFilter try: @@ -717,6 +718,7 @@ def default_otlp_host(host): _settings.transaction_events.attributes.include = [] _settings.custom_insights_events.enabled = True +_settings.custom_insights_events.max_attribute_value = MAX_ATTRIBUTE_LENGTH _settings.ml_insights_events.enabled = False _settings.distributed_tracing.enabled = _environ_as_bool("NEW_RELIC_DISTRIBUTED_TRACING_ENABLED", default=True) @@ -810,6 +812,10 @@ def default_otlp_host(host): "NEW_RELIC_CUSTOM_INSIGHTS_EVENTS_MAX_SAMPLES_STORED", CUSTOM_EVENT_RESERVOIR_SIZE ) +_settings.custom_insights_events.max_attribute_value = 
_environ_as_int( + "NEW_RELIC_CUSTOM_INSIGHTS_EVENTS_MAX_ATTRIBUTE_VALUE", MAX_ATTRIBUTE_LENGTH +) + _settings.event_harvest_config.harvest_limits.ml_event_data = _environ_as_int( "NEW_RELIC_ML_INSIGHTS_EVENTS_MAX_SAMPLES_STORED", ML_EVENT_RESERVOIR_SIZE ) @@ -898,6 +904,7 @@ def default_otlp_host(host): _settings.machine_learning.inference_events_value.enabled = _environ_as_bool( "NEW_RELIC_MACHINE_LEARNING_INFERENCE_EVENT_VALUE_ENABLED", default=False ) +_settings.ml_insights_events.enabled = _environ_as_bool("NEW_RELIC_ML_INSIGHTS_EVENTS_ENABLED", default=False) def global_settings(): @@ -1170,6 +1177,14 @@ def apply_server_side_settings(server_side_config=None, settings=_settings): settings_snapshot.event_harvest_config.harvest_limits.ml_event_data / 12, ) + # Since the server does not override this setting we must override it here manually + # by capping it at the max value of 4095. + apply_config_setting( + settings_snapshot, + "custom_insights_events.max_attribute_value", + min(settings_snapshot.custom_insights_events.max_attribute_value, 4095), + ) + # This will be removed at some future point # Special case for account_id which will be sent instead of # cross_process_id in the future diff --git a/newrelic/core/custom_event.py b/newrelic/core/custom_event.py index b86dc25998..15becf437c 100644 --- a/newrelic/core/custom_event.py +++ b/newrelic/core/custom_event.py @@ -11,27 +11,37 @@ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
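The `config.py` changes above read `custom_insights_events.max_attribute_value` from an environment variable and then clamp it with `min(..., 4095)` during server-side settings application, because the server never overrides this setting. A rough sketch of that resolution order (the helper `environ_as_int` is a simplified stand-in for the agent's `_environ_as_int`; the default of 255 matches the default truncation exercised by the tests below):

```python
import os

MAX_ATTRIBUTE_LENGTH = 255  # default cap, mirroring newrelic.core.attribute
HARD_CEILING = 4095         # maximum allowed custom event attribute value length


def environ_as_int(name, default):
    # Simplified stand-in for the agent's _environ_as_int helper.
    try:
        return int(os.environ.get(name, default))
    except ValueError:
        return default


def effective_max_attribute_value():
    configured = environ_as_int(
        "NEW_RELIC_CUSTOM_INSIGHTS_EVENTS_MAX_ATTRIBUTE_VALUE", MAX_ATTRIBUTE_LENGTH
    )
    # The server does not override this setting, so cap it locally at 4095.
    return min(configured, HARD_CEILING)
```

With no environment override this yields 255; an override of 5000 is clamped down to 4095.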
- import logging import re import time -from newrelic.core.attribute import (check_name_is_string, check_name_length, - process_user_attribute, NameIsNotStringException, NameTooLongException, - MAX_NUM_USER_ATTRIBUTES, MAX_ML_ATTRIBUTE_LENGTH, MAX_NUM_ML_USER_ATTRIBUTES, MAX_ATTRIBUTE_LENGTH) - +from newrelic.core.attribute import ( + MAX_ML_ATTRIBUTE_LENGTH, + MAX_NUM_ML_USER_ATTRIBUTES, + MAX_NUM_USER_ATTRIBUTES, + NameIsNotStringException, + NameTooLongException, + check_name_is_string, + check_name_length, + process_user_attribute, +) +from newrelic.core.config import global_settings _logger = logging.getLogger(__name__) -EVENT_TYPE_VALID_CHARS_REGEX = re.compile(r'^[a-zA-Z0-9:_ ]+$') +EVENT_TYPE_VALID_CHARS_REGEX = re.compile(r"^[a-zA-Z0-9:_ ]+$") + + +class NameInvalidCharactersException(Exception): + pass -class NameInvalidCharactersException(Exception): pass def check_event_type_valid_chars(name): regex = EVENT_TYPE_VALID_CHARS_REGEX if not regex.match(name): raise NameInvalidCharactersException() + def process_event_type(name): """Perform all necessary validation on a potential event type. @@ -55,25 +65,22 @@ def process_event_type(name): check_event_type_valid_chars(name) except NameIsNotStringException: - _logger.debug('Event type must be a string. Dropping ' - 'event: %r', name) + _logger.debug("Event type must be a string. Dropping event: %r", name) return FAILED_RESULT except NameTooLongException: - _logger.debug('Event type exceeds maximum length. Dropping ' - 'event: %r', name) + _logger.debug("Event type exceeds maximum length. Dropping event: %r", name) return FAILED_RESULT except NameInvalidCharactersException: - _logger.debug('Event type has invalid characters. Dropping ' - 'event: %r', name) + _logger.debug("Event type has invalid characters. 
Dropping event: %r", name) return FAILED_RESULT else: return name -def create_custom_event(event_type, params, is_ml_event=False): +def create_custom_event(event_type, params, settings=None, is_ml_event=False): """Creates a valid custom event. Ensures that the custom event has a valid name, and also checks @@ -84,6 +91,7 @@ def create_custom_event(event_type, params, is_ml_event=False): Args: event_type (str): The type (name) of the custom event. params (dict): Attributes to add to the event. + settings: Optional config settings. is_ml_event (bool): Boolean indicating whether create_custom_event was called from record_ml_event for truncation purposes @@ -92,6 +100,7 @@ def create_custom_event(event_type, params, is_ml_event=False): None, if not successful. """ + settings = settings or global_settings() name = process_event_type(event_type) @@ -106,25 +115,30 @@ def create_custom_event(event_type, params, is_ml_event=False): max_length = MAX_ML_ATTRIBUTE_LENGTH max_num_attrs = MAX_NUM_ML_USER_ATTRIBUTES else: - max_length = MAX_ATTRIBUTE_LENGTH + max_length = settings.custom_insights_events.max_attribute_value max_num_attrs = MAX_NUM_USER_ATTRIBUTES key, value = process_user_attribute(k, v, max_length=max_length) if key: if len(attributes) >= max_num_attrs: - _logger.debug('Maximum number of attributes already ' - 'added to event %r. Dropping attribute: %r=%r', - name, key, value) + _logger.debug( + "Maximum number of attributes already added to event %r. Dropping attribute: %r=%r", + name, + key, + value, + ) else: attributes[key] = value except Exception: - _logger.debug('Attributes failed to validate for unknown reason. ' - 'Check traceback for clues. Dropping event: %r.', name, - exc_info=True) + _logger.debug( + "Attributes failed to validate for unknown reason. Check traceback for clues. 
Dropping event: %r.", + name, + exc_info=True, + ) return None intrinsics = { - 'type': name, - 'timestamp': int(1000.0 * time.time()), + "type": name, + "timestamp": int(1000.0 * time.time()), } event = [intrinsics, attributes] diff --git a/tests/agent_features/test_custom_events.py b/tests/agent_features/test_custom_events.py index d03feea291..0fb9c80bc2 100644 --- a/tests/agent_features/test_custom_events.py +++ b/tests/agent_features/test_custom_events.py @@ -14,128 +14,183 @@ import time +from testing_support.fixtures import ( + function_not_called, + override_application_settings, + reset_core_stats_engine, + validate_custom_event_count, + validate_custom_event_in_application_stats_engine, +) + from newrelic.api.application import application_instance as application from newrelic.api.background_task import background_task from newrelic.api.transaction import record_custom_event from newrelic.core.custom_event import process_event_type -from testing_support.fixtures import (reset_core_stats_engine, - validate_custom_event_count, - validate_custom_event_in_application_stats_engine, - override_application_settings, function_not_called) - # Test process_event_type() + def test_process_event_type_name_is_string(): - name = 'string' + name = "string" assert process_event_type(name) == name + def test_process_event_type_name_is_not_string(): name = 42 assert process_event_type(name) is None + def test_process_event_type_name_ok_length(): - ok_name = 'CustomEventType' + ok_name = "CustomEventType" assert process_event_type(ok_name) == ok_name + def test_process_event_type_name_too_long(): - too_long = 'a' * 256 + too_long = "a" * 256 assert process_event_type(too_long) is None + def test_process_event_type_name_valid_chars(): - valid_name = 'az09: ' + valid_name = "az09: " assert process_event_type(valid_name) == valid_name + def test_process_event_type_name_invalid_chars(): - invalid_name = '&' + invalid_name = "&" assert process_event_type(invalid_name) is None + 
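The tests above exercise `process_event_type`, which accepts only names that match `^[a-zA-Z0-9:_ ]+$` and rejects non-strings and over-long names (the 256-character rejection in the tests implies a 255-character limit). A compact sketch of that validation, returning `None` for a dropped event in place of the agent's `FAILED_RESULT`:

```python
import re

EVENT_TYPE_VALID_CHARS_REGEX = re.compile(r"^[a-zA-Z0-9:_ ]+$")
MAX_EVENT_TYPE_LENGTH = 255  # inferred from the 256-char rejection in the tests


def process_event_type(name):
    # Return the name unchanged when valid, or None when the event
    # should be dropped.
    if not isinstance(name, str):
        return None
    if len(name) > MAX_EVENT_TYPE_LENGTH:
        return None
    if not EVENT_TYPE_VALID_CHARS_REGEX.match(name):
        return None
    return name
```

Note that colons, underscores, and spaces are valid in event type names, which is why `"az09: "` passes while `"&"` is dropped.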
_now = time.time() _intrinsics = { - 'type': 'FooEvent', - 'timestamp': _now, + "type": "FooEvent", + "timestamp": _now, } -_user_params = {'foo': 'bar'} +_user_params = {"foo": "bar"} _event = [_intrinsics, _user_params] + @reset_core_stats_engine() @validate_custom_event_in_application_stats_engine(_event) @background_task() def test_add_custom_event_to_transaction_stats_engine(): - record_custom_event('FooEvent', _user_params) + record_custom_event("FooEvent", _user_params) + @reset_core_stats_engine() @validate_custom_event_in_application_stats_engine(_event) def test_add_custom_event_to_application_stats_engine(): app = application() - record_custom_event('FooEvent', _user_params, application=app) + record_custom_event("FooEvent", _user_params, application=app) + @reset_core_stats_engine() @validate_custom_event_count(count=0) @background_task() def test_custom_event_inside_transaction_bad_event_type(): - record_custom_event('!@#$%^&*()', {'foo': 'bar'}) + record_custom_event("!@#$%^&*()", {"foo": "bar"}) + @reset_core_stats_engine() @validate_custom_event_count(count=0) @background_task() def test_custom_event_outside_transaction_bad_event_type(): app = application() - record_custom_event('!@#$%^&*()', {'foo': 'bar'}, application=app) + record_custom_event("!@#$%^&*()", {"foo": "bar"}, application=app) + + +_mixed_params = {"foo": "bar", 123: "bad key"} -_mixed_params = {'foo': 'bar', 123: 'bad key'} @reset_core_stats_engine() @validate_custom_event_in_application_stats_engine(_event) @background_task() def test_custom_event_inside_transaction_mixed_params(): - record_custom_event('FooEvent', _mixed_params) + record_custom_event("FooEvent", _mixed_params) + + +@override_application_settings({"custom_insights_events.max_attribute_value": 4095}) +@reset_core_stats_engine() +@validate_custom_event_in_application_stats_engine([_intrinsics, {"foo": "bar", "bar": "a" * 4095}]) +@background_task() +def test_custom_event_inside_transaction_max_attribute_value(): + 
record_custom_event("FooEvent", {"foo": "bar", 123: "bad key", "bar": "a" * 5000}) + + +@reset_core_stats_engine() +@validate_custom_event_in_application_stats_engine([_intrinsics, {"foo": "bar", "bar": "a" * 255}]) +@background_task() +def test_custom_event_inside_transaction_default_attribute_value(): + record_custom_event("FooEvent", {"foo": "bar", 123: "bad key", "bar": "a" * 5000}) + + +@override_application_settings({"custom_insights_events.max_attribute_value": 4095}) +@reset_core_stats_engine() +@validate_custom_event_in_application_stats_engine([_intrinsics, {"foo": "bar", "bar": "a" * 4095}]) +def test_custom_event_outside_transaction_max_attribute_value(): + app = application() + record_custom_event("FooEvent", {"foo": "bar", 123: "bad key", "bar": "a" * 5000}, application=app) + + +@reset_core_stats_engine() +@validate_custom_event_in_application_stats_engine([_intrinsics, {"foo": "bar", "bar": "a" * 255}]) +def test_custom_event_outside_transaction_default_attribute_value(): + app = application() + record_custom_event("FooEvent", {"foo": "bar", 123: "bad key", "bar": "a" * 5000}, application=app) + @reset_core_stats_engine() @validate_custom_event_in_application_stats_engine(_event) @background_task() def test_custom_event_outside_transaction_mixed_params(): app = application() - record_custom_event('FooEvent', _mixed_params, application=app) + record_custom_event("FooEvent", _mixed_params, application=app) + + +_bad_params = {"*" * 256: "too long", 123: "bad key"} +_event_with_no_params = [{"type": "FooEvent", "timestamp": _now}, {}] -_bad_params = {'*' * 256: 'too long', 123: 'bad key'} -_event_with_no_params = [{'type': 'FooEvent', 'timestamp': _now}, {}] @reset_core_stats_engine() @validate_custom_event_in_application_stats_engine(_event_with_no_params) @background_task() def test_custom_event_inside_transaction_bad_params(): - record_custom_event('FooEvent', _bad_params) + record_custom_event("FooEvent", _bad_params) + @reset_core_stats_engine() 
@validate_custom_event_in_application_stats_engine(_event_with_no_params) @background_task() def test_custom_event_outside_transaction_bad_params(): app = application() - record_custom_event('FooEvent', _bad_params, application=app) + record_custom_event("FooEvent", _bad_params, application=app) + @reset_core_stats_engine() @validate_custom_event_count(count=0) @background_task() def test_custom_event_params_not_a_dict(): - record_custom_event('ParamsListEvent', ['not', 'a', 'dict']) + record_custom_event("ParamsListEvent", ["not", "a", "dict"]) + # Tests for Custom Events configuration settings -@override_application_settings({'collect_custom_events': False}) + +@override_application_settings({"collect_custom_events": False}) @reset_core_stats_engine() @validate_custom_event_count(count=0) @background_task() def test_custom_event_settings_check_collector_flag(): - record_custom_event('FooEvent', _user_params) + record_custom_event("FooEvent", _user_params) + -@override_application_settings({'custom_insights_events.enabled': False}) +@override_application_settings({"custom_insights_events.enabled": False}) @reset_core_stats_engine() @validate_custom_event_count(count=0) @background_task() def test_custom_event_settings_check_custom_insights_enabled(): - record_custom_event('FooEvent', _user_params) + record_custom_event("FooEvent", _user_params) + # Test that record_custom_event() methods will short-circuit. # @@ -143,15 +198,17 @@ def test_custom_event_settings_check_custom_insights_enabled(): # `create_custom_event()` function is not called, in order to avoid the # event_type and attribute processing. 
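The new truncation tests above rely on attribute values being clipped to the configured maximum (255 by default, 4095 when `custom_insights_events.max_attribute_value` is overridden) and on invalid keys being dropped before the event is stored. A simplified sketch of that per-attribute processing (the function name and the attribute-count limit are illustrative, not the agent's exact internals):

```python
MAX_NUM_USER_ATTRIBUTES = 64  # illustrative cap on attributes per event


def build_attributes(params, max_length=255, max_num_attrs=MAX_NUM_USER_ATTRIBUTES):
    attributes = {}
    for key, value in params.items():
        # Non-string or over-long keys are dropped, mirroring the "bad key" cases.
        if not isinstance(key, str) or len(key) > 255:
            continue
        if isinstance(value, str):
            value = value[:max_length]  # truncate oversized values
        if len(attributes) >= max_num_attrs:
            break  # attributes beyond the cap are dropped
        attributes[key] = value
    return attributes
```

This reproduces the expected events in the tests: `"a" * 5000` becomes `"a" * 255` by default and `"a" * 4095` with the override, while the `123` key disappears.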
-@override_application_settings({'custom_insights_events.enabled': False}) -@function_not_called('newrelic.api.transaction', 'create_custom_event') + +@override_application_settings({"custom_insights_events.enabled": False}) +@function_not_called("newrelic.api.transaction", "create_custom_event") @background_task() def test_transaction_create_custom_event_not_called(): - record_custom_event('FooEvent', _user_params) + record_custom_event("FooEvent", _user_params) + -@override_application_settings({'custom_insights_events.enabled': False}) -@function_not_called('newrelic.core.application', 'create_custom_event') +@override_application_settings({"custom_insights_events.enabled": False}) +@function_not_called("newrelic.core.application", "create_custom_event") @background_task() def test_application_create_custom_event_not_called(): app = application() - record_custom_event('FooEvent', _user_params, application=app) + record_custom_event("FooEvent", _user_params, application=app) diff --git a/tests/agent_unittests/test_package_version_utils.py b/tests/agent_unittests/test_package_version_utils.py index 5ed689ea2a..b57c91aa60 100644 --- a/tests/agent_unittests/test_package_version_utils.py +++ b/tests/agent_unittests/test_package_version_utils.py @@ -13,8 +13,10 @@ # limitations under the License. 
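The test-file diff below replaces manual `setattr`/`delattr` bookkeeping with pytest's `monkeypatch` fixture, which undoes every patch automatically at teardown; `raising=False` allows patching attributes that do not yet exist on the target. A tiny stand-in class (not pytest's actual implementation) illustrating those setattr/undo semantics:

```python
class MiniMonkeyPatch:
    # Minimal stand-in for pytest.MonkeyPatch, for illustration only.
    _SENTINEL = object()

    def __init__(self):
        self._saved = []

    def setattr(self, target, name, value, raising=True):
        old = getattr(target, name, self._SENTINEL)
        if raising and old is self._SENTINEL:
            # Default behavior: refuse to patch a missing attribute.
            raise AttributeError(name)
        self._saved.append((target, name, old))
        setattr(target, name, value)

    def undo(self):
        # Restore original values in reverse order; delete attributes
        # that did not exist before patching.
        for target, name, old in reversed(self._saved):
            if old is self._SENTINEL:
                delattr(target, name)
            else:
                setattr(target, name, old)
        self._saved.clear()
```

The automatic undo is what lets the tests below drop their trailing `delattr(pytest, attr)` cleanup lines.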
import sys +import warnings import pytest +import six from testing_support.validators.validate_function_called import validate_function_called from newrelic.common.package_version_utils import ( @@ -66,30 +68,26 @@ def cleared_package_version_cache(): ("version_tuple", [3, 1, "0b2"], "3.1.0b2"), ), ) -def test_get_package_version(attr, value, expected_value): +def test_get_package_version(monkeypatch, attr, value, expected_value): # There is no file/module here, so we monkeypatch # pytest instead for our purposes - setattr(pytest, attr, value) + monkeypatch.setattr(pytest, attr, value, raising=False) version = get_package_version("pytest") assert version == expected_value - delattr(pytest, attr) # This test only works on Python 3.7 @SKIP_IF_IMPORTLIB_METADATA -def test_skips_version_callables(): +def test_skips_version_callables(monkeypatch): # There is no file/module here, so we monkeypatch # pytest instead for our purposes - setattr(pytest, "version", lambda x: "1.2.3.4") - setattr(pytest, "version_tuple", [3, 1, "0b2"]) + monkeypatch.setattr(pytest, "version", lambda x: "1.2.3.4", raising=False) + monkeypatch.setattr(pytest, "version_tuple", [3, 1, "0b2"], raising=False) version = get_package_version("pytest") assert version == "3.1.0b2" - delattr(pytest, "version") - delattr(pytest, "version_tuple") - # This test only works on Python 3.7 @SKIP_IF_IMPORTLIB_METADATA @@ -102,13 +100,12 @@ def test_skips_version_callables(): ("version_tuple", [3, 1, "0b2"], (3, 1, "0b2")), ), ) -def test_get_package_version_tuple(attr, value, expected_value): +def test_get_package_version_tuple(monkeypatch, attr, value, expected_value): # There is no file/module here, so we monkeypatch # pytest instead for our purposes - setattr(pytest, attr, value) + monkeypatch.setattr(pytest, attr, value, raising=False) version = get_package_version_tuple("pytest") assert version == expected_value - delattr(pytest, attr) @SKIP_IF_NOT_IMPORTLIB_METADATA @@ -132,10 +129,28 @@ def 
test_pkg_resources_metadata(): assert version not in NULL_VERSIONS, version +def _getattr_deprecation_warning(attr): + if attr == "__version__": + warnings.warn("Testing deprecation warnings.", DeprecationWarning) + return "3.2.1" + else: + raise NotImplementedError() + + +@pytest.mark.skipif(six.PY2, reason="Can't add Deprecation in __version__ in Python 2.") +def test_deprecation_warning_suppression(monkeypatch, recwarn): + # Add fake module to be deleted later + monkeypatch.setattr(pytest, "__getattr__", _getattr_deprecation_warning, raising=False) + + assert get_package_version("pytest") == "3.2.1" + + assert not recwarn.list, "Warnings not suppressed." + + def test_version_caching(monkeypatch): # Add fake module to be deleted later sys.modules["mymodule"] = sys.modules["pytest"] - setattr(pytest, "__version__", "1.0.0") + monkeypatch.setattr(pytest, "__version__", "1.0.0", raising=False) version = get_package_version("mymodule") assert version not in NULL_VERSIONS, version From 21730dbe263a59239d6ce6700964c2d5601ff9f1 Mon Sep 17 00:00:00 2001 From: Uma Annamalai Date: Tue, 14 Nov 2023 17:03:14 -0800 Subject: [PATCH 003/199] Switch AI instrumentation to use custom events (#974) * Add OpenAI Test Infrastructure (#926) * Add openai to tox * Add OpenAI test files. * Add test functions. * [Mega-Linter] Apply linters fixes --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: mergify[bot] * OpenAI Mock Backend (#929) * Add mock external openai server * Add mocked OpenAI server fixtures * Set up recorded responses. * Clean mock server to depend on http server * Linting * Pin flask version for flask restx tests. (#931) * Ignore new redis methods. 
(#932) Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> * Remove approved paths * Update CI Image (#930) * Update available python versions in CI * Update makefile with overrides * Fix default branch detection for arm builds --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add mocking for embedding endpoint * [Mega-Linter] Apply linters fixes * Add ratelimit headers * [Mega-Linter] Apply linters fixes * Only get package version once (#928) * Only get package version once * Add disconnect method * Add disconnect method --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add datalib dependency for embedding testing. * Add OpenAI Test Infrastructure (#926) * Add openai to tox * Add OpenAI test files. * Add test functions. * [Mega-Linter] Apply linters fixes --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: mergify[bot] * Add mock external openai server * Add mocked OpenAI server fixtures * Set up recorded responses. * Clean mock server to depend on http server * Linting * Remove approved paths * Add mocking for embedding endpoint * [Mega-Linter] Apply linters fixes * Add ratelimit headers * [Mega-Linter] Apply linters fixes * Add datalib dependency for embedding testing. --------- Co-authored-by: Uma Annamalai Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: TimPansino Co-authored-by: Hannah Stepanek Co-authored-by: mergify[bot] * Update OpenAI testing infra to match bedrock (#939) * Add OpenAI sync chat completion instrumentation (#934) * Add openai sync instrumentation. * Remove commented code. * Test cleanup. * Add request/ response IDs. * Fixups. * Add conversation ID to message events. 
--------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add OpenAI sync embedding instrumentation (#938) * Add sync instrumentation for OpenAI embeddings. * Remove comments. * Clean up embedding event dictionary. * Update response_time to duration. * Linting fixes. * [Mega-Linter] Apply linters fixes * Trigger tests --------- Co-authored-by: umaannamalai Co-authored-by: Hannah Stepanek * Instrument acreate's for open-ai (#935) * Instrument acreate's for open ai async * Remove duplicated vendor * Re-use expected & input payloads in tests * Attach ml_event to APM entity by default (#940) * Attach non InferenceEvents to APM entity * Validate both resource payloads * Add tests for non-inference events * Add OpenAI sync embedding instrumentation (#938) * Add sync instrumentation for OpenAI embeddings. * Remove comments. * Clean up embedding event dictionary. * Update response_time to duration. * Linting fixes. * [Mega-Linter] Apply linters fixes * Trigger tests --------- Co-authored-by: umaannamalai Co-authored-by: Hannah Stepanek * Fixup: test names --------- Co-authored-by: Uma Annamalai Co-authored-by: umaannamalai Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add truncation for ML events. (#943) * Add 4096 char truncation for ML events. * Add max attr check. * Fixup. * Fix character length ml event test. * Ignore test_ml_events.py for Py2. * Cleanup custom event if checks. * Add import statement. --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add framework metric for OpenAI. (#945) * Add framework metric for OpenAI. * [Mega-Linter] Apply linters fixes * Trigger tests * Fix missing version info. * [Mega-Linter] Apply linters fixes --------- Co-authored-by: umaannamalai Co-authored-by: Hannah Stepanek Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add truncation support for ML events recorded outside txns. 
(#949) * Add ml tests for outside transaction. * Update validator. * Add ML flag to application code path for record_ml_event. * Bedrock Testing Infrastructure (#937) * Add AWS Bedrock testing infrastructure * Cache Package Version Lookups (#946) * Cache _get_package_version * Add Python 2.7 support to get_package_version caching * [Mega-Linter] Apply linters fixes * Bump tests --------- Co-authored-by: SlavaSkvortsov <29122694+SlavaSkvortsov@users.noreply.github.com> Co-authored-by: TimPansino * Fix Redis Generator Methods (#947) * Fix scan_iter for redis * Replace generator methods * Update instance info instrumentation * Remove mistake from uninstrumented methods * Add skip condition to asyncio generator tests * Add skip condition to asyncio generator tests --------- Co-authored-by: Lalleh Rafeei Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Automatic RPM System Updates (#948) * Checkout old action * Adding RPM action * Add dry run * Incorporating action into workflow * Wire secret into custom action * Enable action * Correct action name * Fix syntax * Fix quoting issues * Drop pre-verification. Does not work on python * Fix merge artifact * Remove OpenAI references --------- Co-authored-by: Uma Annamalai Co-authored-by: SlavaSkvortsov <29122694+SlavaSkvortsov@users.noreply.github.com> Co-authored-by: TimPansino Co-authored-by: Lalleh Rafeei Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Mock openai error responses (#950) * Add example tests and mock error responses * Set invalid api key in auth error test Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> * OpenAI ErrorTrace attributes (#941) * Add openai sync instrumentation. * Remove commented code. 
* Initial openai error commit * Add example tests and mock error responses * Changes to attribute collection * Change error tests to match mock server * [Mega-Linter] Apply linters fixes * Trigger tests * Add dt_enabled decorator to error tests * Add embedded and async error tests * [Mega-Linter] Apply linters fixes * Trigger tests * Add http.statusCode to span before notice_error call * Report number of messages in error trace even if 0 * Revert notice_error and add _nr_message attr * Remove enabled_ml_settings as not needed * Add stats engine _nr_message test * [Mega-Linter] Apply linters fixes * Trigger tests * Revert black formatting in unicode/byte messages --------- Co-authored-by: Uma Annamalai Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: Hannah Stepanek Co-authored-by: lrafeei Co-authored-by: hmstepanek * Bedrock Sync Chat Completion Instrumentation (#953) * Add AWS Bedrock testing infrastructure * Squashed commit of the following: commit 2834663794c649124052e510c1c9557a830c060a Author: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> Date: Mon Oct 9 17:42:05 2023 -0700 OpenAI Mock Backend (#929) * Add mock external openai server * Add mocked OpenAI server fixtures * Set up recorded responses. * Clean mock server to depend on http server * Linting * Pin flask version for flask restx tests. (#931) * Ignore new redis methods. 
(#932) Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> * Remove approved paths * Update CI Image (#930) * Update available python versions in CI * Update makefile with overrides * Fix default branch detection for arm builds --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add mocking for embedding endpoint * [Mega-Linter] Apply linters fixes * Add ratelimit headers * [Mega-Linter] Apply linters fixes * Only get package version once (#928) * Only get package version once * Add disconnect method * Add disconnect method --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add datalib dependency for embedding testing. * Add OpenAI Test Infrastructure (#926) * Add openai to tox * Add OpenAI test files. * Add test functions. * [Mega-Linter] Apply linters fixes --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: mergify[bot] * Add mock external openai server * Add mocked OpenAI server fixtures * Set up recorded responses. * Clean mock server to depend on http server * Linting * Remove approved paths * Add mocking for embedding endpoint * [Mega-Linter] Apply linters fixes * Add ratelimit headers * [Mega-Linter] Apply linters fixes * Add datalib dependency for embedding testing. --------- Co-authored-by: Uma Annamalai Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: TimPansino Co-authored-by: Hannah Stepanek Co-authored-by: mergify[bot] commit db63d4598c94048986c0e00ebb2cd8827100b54c Author: Uma Annamalai Date: Mon Oct 2 15:31:38 2023 -0700 Add OpenAI Test Infrastructure (#926) * Add openai to tox * Add OpenAI test files. * Add test functions. 
* [Mega-Linter] Apply linters fixes --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: mergify[bot] * Squashed commit of the following: commit 182c7a8c8a91e2d0f234f7ed7d4a14a2422c8342 Author: Uma Annamalai Date: Fri Oct 13 10:12:55 2023 -0700 Add request/ response IDs. commit f6d13f822c22d2039ec32be86b2c54f9dc3de1c9 Author: Uma Annamalai Date: Thu Oct 12 13:23:39 2023 -0700 Test cleanup. commit d0576631d009e481bd5887a3243aac99b097d823 Author: Uma Annamalai Date: Tue Oct 10 10:23:00 2023 -0700 Remove commented code. commit dd29433e719482babbe5c724e7330b1f6324abd7 Author: Uma Annamalai Date: Tue Oct 10 10:19:01 2023 -0700 Add openai sync instrumentation. * TEMP * Bedrock titan extraction nearly complete * Cleaning up titan bedrock implementation * TEMP * Tests for bedrock passing Co-authored-by: Lalleh Rafeei * Cleaned up titan testing Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek * Parametrized bedrock testing * Add support for AI21-J2 models * Change to dynamic no conversation id events * Drop all openai refs * [Mega-Linter] Apply linters fixes * Adding response_id and response_model * Drop python 3.7 tests for Hypercorn (#954) * Apply suggestions from code review * Remove unused import --------- Co-authored-by: Uma Annamalai Co-authored-by: SlavaSkvortsov <29122694+SlavaSkvortsov@users.noreply.github.com> Co-authored-by: TimPansino Co-authored-by: Lalleh Rafeei Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> * Feature bedrock cohere instrumentation (#955) * Add AWS Bedrock testing infrastructure * Add cohere model * Remove openai instrumentation from this branch * Remove OpenAI from newrelic/config.py --------- Co-authored-by: Uma Annamalai Co-authored-by: Tim Pansino Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek * AWS Bedrock Embedding Instrumentation (#957) * AWS Bedrock embedding instrumentation * Correct symbol name * Add support for bedrock claude (#960) Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> * Combine Botocore Tests (#959) * Initial file migration * Enable DT on all span tests * Add pytest skip for older botocore versions * Fixup: app name merge conflict --------- Co-authored-by: Hannah Stepanek * Pin openai tests to below 1.0 (#962) * Pin openai below 1.0 * Fixup * Add openai feedback support (#942) * Add get_ai_message_ids & message id capturing * Add tests * Remove generator * Add tests for conversation id unset * Add error code to mocked responses * Remove bedrock tests --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: Uma Annamalai * Add ingest source to openai events (#961) * Pin openai below 1.0 * Fixup * Add ingest_source to events * Remove duplicate test file * Handle 0.32.0.post1 version in tests (#963) --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Handle 0.32.0.post1 version in tests (#963) * Initial merge commit * Update moto * Test for Bedrock embeddings metrics * Add
record_llm_feedback_event API (#964) * Implement record_ai_feedback API. * [Mega-Linter] Apply linters fixes * Change API name to record_ai_feedback_event. * Fix API naming. * Rename to record_llm_feedback_event and get_llm_message_ids. * [Mega-Linter] Apply linters fixes * Address review feedback. * Update test structure. * [Mega-Linter] Apply linters fixes * Bump tests. --------- Co-authored-by: umaannamalai * Bedrock Error Tracing (#966) * Drop python 3.7 tests for Hypercorn (#954) * Fix pyenv installation for devcontainer (#936) Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Remove duplicate kafka import hook (#956) Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Initial bedrock error tracing commit * Handle 0.32.0.post1 version in tests (#963) * Add status code to mock bedrock server * Updating error response recording logic * Work on bedrock error tracing * Chat completion error tracing * Adding embedding error tracing * Delete comment * Update moto --------- Co-authored-by: SlavaSkvortsov <29122694+SlavaSkvortsov@users.noreply.github.com> Co-authored-by: TimPansino Co-authored-by: Lalleh Rafeei Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> Co-authored-by: Hannah Stepanek * Fix expected chat completion tests * Remove commented out code * Switch openai to use custom events. * Cleanup. * Switch Bedrock instrumentation to custom events. * Fix record_feedback test. * Fix disabled settings. * Add attribute length setting to bedrock conftest.
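The net effect of the patch below is that LLM telemetry (LlmEmbedding, LlmChatCompletionSummary, LlmChatCompletionMessage, LlmFeedbackMessage) is recorded through record_custom_event, with the attribute-value cap raised to 4096 characters (the "custom_insights_events.max_attribute_value" setting in the bedrock conftest). As a rough standalone sketch of what that cap means for an event payload — the helper name and the oversized "content" attribute here are illustrative, not agent internals:

```python
MAX_ATTRIBUTE_VALUE = 4096  # assumed to mirror "custom_insights_events.max_attribute_value"


def truncate_event_attributes(event):
    """Return a copy of an event dict with oversized string values truncated.

    Illustrative only -- the agent applies its truncation inside the stats
    engine, not through a helper like this.
    """
    truncated = {}
    for key, value in event.items():
        if isinstance(value, str) and len(value) > MAX_ATTRIBUTE_VALUE:
            value = value[:MAX_ATTRIBUTE_VALUE]
        truncated[key] = value
    return truncated


# Same shape of arguments the feedback tests pass to record_llm_feedback_event;
# "content" is a made-up oversized attribute to show the cap taking effect.
feedback_event = {
    "message_id": "message_id",
    "rating": "Good",
    "content": "x" * 5000,
}
print(len(truncate_event_attributes(feedback_event)["content"]))  # 4096
```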
---------
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
Co-authored-by: mergify[bot]
Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com>
Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com>
Co-authored-by: TimPansino
Co-authored-by: Hannah Stepanek
Co-authored-by: umaannamalai
Co-authored-by: SlavaSkvortsov <29122694+SlavaSkvortsov@users.noreply.github.com>
Co-authored-by: Lalleh Rafeei
Co-authored-by: lrafeei
Co-authored-by: hmstepanek
Co-authored-by: Tim Pansino
---
 newrelic/api/ml_model.py                      |   2 +-
 newrelic/hooks/external_botocore.py           |   6 +-
 newrelic/hooks/mlmodel_openai.py              |  16 ++-
 .../test_record_llm_feedback_event.py         |  14 +--
 tests/external_botocore/conftest.py           |   2 +-
 .../test_bedrock_chat_completion.py           |  20 ++--
 .../test_bedrock_embeddings.py                |  16 +--
 tests/mlmodel_openai/test_chat_completion.py  |  42 +++----
 tests/mlmodel_openai/test_embeddings.py       |  35 +++---
 .../test_get_llm_message_ids.py               |   8 +-
 .../validators/validate_custom_events.py      | 109 ++++++++++++++++++
 tox.ini                                       |   2 +-
 12 files changed, 191 insertions(+), 81 deletions(-)
 create mode 100644 tests/testing_support/validators/validate_custom_events.py

diff --git a/newrelic/api/ml_model.py b/newrelic/api/ml_model.py
index d01042b359..1951f91312 100644
--- a/newrelic/api/ml_model.py
+++ b/newrelic/api/ml_model.py
@@ -81,4 +81,4 @@ def record_llm_feedback_event(
     }
     feedback_message_event.update(metadata)
 
-    transaction.record_ml_event("LlmFeedbackMessage", feedback_message_event)
+    transaction.record_custom_event("LlmFeedbackMessage", feedback_message_event)
diff --git a/newrelic/hooks/external_botocore.py b/newrelic/hooks/external_botocore.py
index c075f0874d..8fd9c05e09 100644
--- a/newrelic/hooks/external_botocore.py
+++ b/newrelic/hooks/external_botocore.py
@@ -119,7 +119,7 @@ def create_chat_completion_message_event(
             "vendor": "bedrock",
             "ingest_source": "Python",
         }
-        transaction.record_ml_event("LlmChatCompletionMessage", chat_completion_message_dict)
+        transaction.record_custom_event("LlmChatCompletionMessage", chat_completion_message_dict)
 
 
 def extract_bedrock_titan_text_model(request_body, response_body=None):
@@ -376,7 +376,7 @@ def handle_embedding_event(
         }
     )
 
-    transaction.record_ml_event("LlmEmbedding", embedding_dict)
+    transaction.record_custom_event("LlmEmbedding", embedding_dict)
 
 
 def handle_chat_completion_event(
@@ -413,7 +413,7 @@ def handle_chat_completion_event(
         }
     )
 
-    transaction.record_ml_event("LlmChatCompletionSummary", chat_completion_summary_dict)
+    transaction.record_custom_event("LlmChatCompletionSummary", chat_completion_summary_dict)
 
     create_chat_completion_message_event(
         transaction=transaction,
diff --git a/newrelic/hooks/mlmodel_openai.py b/newrelic/hooks/mlmodel_openai.py
index a51d8aae87..a53f2fc623 100644
--- a/newrelic/hooks/mlmodel_openai.py
+++ b/newrelic/hooks/mlmodel_openai.py
@@ -122,7 +122,8 @@ def wrap_embedding_create(wrapped, instance, args, kwargs):
         "ingest_source": "Python",
     }
 
-    transaction.record_ml_event("LlmEmbedding", embedding_dict)
+    transaction.record_custom_event("LlmEmbedding", embedding_dict)
+
     return response
@@ -213,7 +214,8 @@ def wrap_chat_completion_create(wrapped, instance, args, kwargs):
         "response.number_of_messages": len(messages) + len(choices),
     }
 
-    transaction.record_ml_event("LlmChatCompletionSummary", chat_completion_summary_dict)
+    transaction.record_custom_event("LlmChatCompletionSummary", chat_completion_summary_dict)
+
     message_list = list(messages)
     if choices:
         message_list.extend([choices[0].message])
@@ -287,7 +289,9 @@ def create_chat_completion_message_event(
             "vendor": "openAI",
             "ingest_source": "Python",
         }
-        transaction.record_ml_event("LlmChatCompletionMessage", chat_completion_message_dict)
+
+        transaction.record_custom_event("LlmChatCompletionMessage", chat_completion_message_dict)
+
     return (conversation_id, request_id, message_ids)
@@ -368,7 +372,8 @@ async def wrap_embedding_acreate(wrapped, instance, args, kwargs):
         "ingest_source": "Python",
     }
 
-    transaction.record_ml_event("LlmEmbedding", embedding_dict)
+    transaction.record_custom_event("LlmEmbedding", embedding_dict)
+
     return response
@@ -465,7 +470,8 @@ async def wrap_chat_completion_acreate(wrapped, instance, args, kwargs):
         "ingest_source": "Python",
     }
 
-    transaction.record_ml_event("LlmChatCompletionSummary", chat_completion_summary_dict)
+    transaction.record_custom_event("LlmChatCompletionSummary", chat_completion_summary_dict)
+
     message_list = list(messages)
     if choices:
         message_list.extend([choices[0].message])
diff --git a/tests/agent_features/test_record_llm_feedback_event.py b/tests/agent_features/test_record_llm_feedback_event.py
index 59921ff400..c9489c050e 100644
--- a/tests/agent_features/test_record_llm_feedback_event.py
+++ b/tests/agent_features/test_record_llm_feedback_event.py
@@ -12,10 +12,8 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-from testing_support.fixtures import reset_core_stats_engine
-from testing_support.validators.validate_ml_event_count import validate_ml_event_count
-from testing_support.validators.validate_ml_events import validate_ml_events
-
+from testing_support.fixtures import reset_core_stats_engine, validate_custom_event_count
+from testing_support.validators.validate_custom_events import validate_custom_events
 from newrelic.api.background_task import background_task
 from newrelic.api.ml_model import record_llm_feedback_event
@@ -38,8 +36,8 @@ def test_record_llm_feedback_event_all_args_supplied():
             },
         ),
     ]
-
-    @validate_ml_events(llm_feedback_all_args_recorded_events)
+
+    @validate_custom_events(llm_feedback_all_args_recorded_events)
     @background_task()
     def _test():
         record_llm_feedback_event(
@@ -73,7 +71,7 @@ def test_record_llm_feedback_event_required_args_supplied():
         ),
     ]
 
-    @validate_ml_events(llm_feedback_required_args_recorded_events)
+    @validate_custom_events(llm_feedback_required_args_recorded_events)
     @background_task()
     def _test():
         record_llm_feedback_event(message_id="message_id", rating="Good")
@@ -82,7 +80,7 @@ def _test():
 
 
 @reset_core_stats_engine()
-@validate_ml_event_count(count=0)
+@validate_custom_event_count(count=0)
 def test_record_llm_feedback_event_outside_txn():
     record_llm_feedback_event(
         rating="Good",
diff --git a/tests/external_botocore/conftest.py b/tests/external_botocore/conftest.py
index 6dbf20ef42..c992726b3e 100644
--- a/tests/external_botocore/conftest.py
+++ b/tests/external_botocore/conftest.py
@@ -42,7 +42,7 @@
     "transaction_tracer.stack_trace_threshold": 0.0,
     "debug.log_data_collector_payloads": True,
     "debug.record_transaction_failure": True,
-    "ml_insights_events.enabled": True,
+    "custom_insights_events.max_attribute_value": 4096
 }
 collector_agent_registration = collector_agent_registration_fixture(
     app_name="Python Agent Test (external_botocore)",
diff --git a/tests/external_botocore/test_bedrock_chat_completion.py b/tests/external_botocore/test_bedrock_chat_completion.py
index 4f32a92ac6..18578a887f 100644
--- a/tests/external_botocore/test_bedrock_chat_completion.py
+++ b/tests/external_botocore/test_bedrock_chat_completion.py
@@ -28,12 +28,12 @@
     dt_enabled,
     override_application_settings,
     reset_core_stats_engine,
+    validate_custom_event_count,
 )
 from testing_support.validators.validate_error_trace_attributes import (
     validate_error_trace_attributes,
 )
-from testing_support.validators.validate_ml_event_count import validate_ml_event_count
-from testing_support.validators.validate_ml_events import validate_ml_events
+from testing_support.validators.validate_custom_events import validate_custom_events
 from testing_support.validators.validate_transaction_metrics import (
     validate_transaction_metrics,
 )
@@ -106,9 +106,9 @@ def expected_client_error(model_id):
 # not working with claude
 @reset_core_stats_engine()
 def test_bedrock_chat_completion_in_txn_with_convo_id(set_trace_info, exercise_model, expected_events):
-    @validate_ml_events(expected_events)
+    @validate_custom_events(expected_events)
     # One summary event, one user message, and one response message from the assistant
-    @validate_ml_event_count(count=3)
+    @validate_custom_event_count(count=3)
     @validate_transaction_metrics(
         name="test_bedrock_chat_completion_in_txn_with_convo_id",
         custom_metrics=[
@@ -128,9 +128,9 @@ def _test():
 # not working with claude
 @reset_core_stats_engine()
 def test_bedrock_chat_completion_in_txn_no_convo_id(set_trace_info, exercise_model, expected_events_no_convo_id):
-    @validate_ml_events(expected_events_no_convo_id)
+    @validate_custom_events(expected_events_no_convo_id)
     # One summary event, one user message, and one response message from the assistant
-    @validate_ml_event_count(count=3)
+    @validate_custom_event_count(count=3)
     @validate_transaction_metrics(
         name="test_bedrock_chat_completion_in_txn_no_convo_id",
         custom_metrics=[
@@ -147,19 +147,19 @@ def _test():
 
 
 @reset_core_stats_engine()
-@validate_ml_event_count(count=0)
+@validate_custom_event_count(count=0)
 def test_bedrock_chat_completion_outside_txn(set_trace_info, exercise_model):
     set_trace_info()
     add_custom_attribute("conversation_id", "my-awesome-id")
     exercise_model(prompt=_test_bedrock_chat_completion_prompt, temperature=0.7, max_tokens=100)
 
 
-disabled_ml_settings = {"machine_learning.enabled": False, "ml_insights_events.enabled": False}
+disabled_custom_insights_settings = {"custom_insights_events.enabled": False}
 
 
-@override_application_settings(disabled_ml_settings)
+@override_application_settings(disabled_custom_insights_settings)
 @reset_core_stats_engine()
-@validate_ml_event_count(count=0)
+@validate_custom_event_count(count=0)
 @validate_transaction_metrics(
     name="test_bedrock_chat_completion_disabled_settings",
     custom_metrics=[
diff --git a/tests/external_botocore/test_bedrock_embeddings.py b/tests/external_botocore/test_bedrock_embeddings.py
index db985ee467..788e4ec867 100644
--- a/tests/external_botocore/test_bedrock_embeddings.py
+++ b/tests/external_botocore/test_bedrock_embeddings.py
@@ -27,12 +27,12 @@
     dt_enabled,
     override_application_settings,
     reset_core_stats_engine,
+    validate_custom_event_count
 )
 from testing_support.validators.validate_error_trace_attributes import (
     validate_error_trace_attributes,
 )
-from testing_support.validators.validate_ml_event_count import validate_ml_event_count
-from testing_support.validators.validate_ml_events import validate_ml_events
+from testing_support.validators.validate_custom_events import validate_custom_events
 from testing_support.validators.validate_transaction_metrics import (
     validate_transaction_metrics,
 )
@@ -40,7 +40,7 @@
 from newrelic.api.background_task import background_task
 from newrelic.common.object_names import callable_name
 
-disabled_ml_insights_settings = {"ml_insights_events.enabled": False}
+disabled_custom_insights_settings = {"custom_insights_events.enabled": False}
 
 
 @pytest.fixture(scope="session", params=[False, True], ids=["Bytes", "Stream"])
@@ -92,8 +92,8 @@ def expected_client_error(model_id):
 
 @reset_core_stats_engine()
 def test_bedrock_embedding(set_trace_info, exercise_model, expected_events):
-    @validate_ml_events(expected_events)
-    @validate_ml_event_count(count=1)
+    @validate_custom_events(expected_events)
+    @validate_custom_event_count(count=1)
     @validate_transaction_metrics(
         name="test_bedrock_embedding",
         custom_metrics=[
@@ -110,7 +110,7 @@ def _test():
 
 
 @reset_core_stats_engine()
-@validate_ml_event_count(count=0)
+@validate_custom_event_count(count=0)
 def test_bedrock_embedding_outside_txn(exercise_model):
     exercise_model(prompt="This is an embedding test.")
@@ -119,9 +119,9 @@
 _client_error_name = callable_name(_client_error)
 
 
-@override_application_settings(disabled_ml_insights_settings)
+@override_application_settings(disabled_custom_insights_settings)
 @reset_core_stats_engine()
-@validate_ml_event_count(count=0)
+@validate_custom_event_count(count=0)
 @validate_transaction_metrics(
     name="test_bedrock_embeddings:test_bedrock_embedding_disabled_settings",
     custom_metrics=[
diff --git a/tests/mlmodel_openai/test_chat_completion.py b/tests/mlmodel_openai/test_chat_completion.py
index 6f3762a826..c864e4f030 100644
--- a/tests/mlmodel_openai/test_chat_completion.py
+++ b/tests/mlmodel_openai/test_chat_completion.py
@@ -16,9 +16,9 @@
 from testing_support.fixtures import (
     override_application_settings,
     reset_core_stats_engine,
+    validate_custom_event_count,
 )
-from testing_support.validators.validate_ml_event_count import validate_ml_event_count
-from testing_support.validators.validate_ml_events import validate_ml_events
+from testing_support.validators.validate_custom_events import validate_custom_events
 from testing_support.validators.validate_transaction_metrics import (
     validate_transaction_metrics,
 )
@@ -26,7 +26,7 @@
 from newrelic.api.background_task import background_task
 from newrelic.api.transaction import add_custom_attribute
 
-disabled_ml_insights_settings = {"ml_insights_events.enabled": False}
+disabled_custom_insights_settings = {"custom_insights_events.enabled": False}
 
 _test_openai_chat_completion_messages = (
     {"role": "system", "content": "You are a scientist."},
@@ -129,9 +129,9 @@
 
 
 @reset_core_stats_engine()
-@validate_ml_events(chat_completion_recorded_events)
+@validate_custom_events(chat_completion_recorded_events)
 # One summary event, one system message, one user message, and one response message from the assistant
-@validate_ml_event_count(count=4)
+@validate_custom_event_count(count=4)
 @validate_transaction_metrics(
     name="test_chat_completion:test_openai_chat_completion_sync_in_txn_with_convo_id",
     custom_metrics=[
@@ -244,9 +244,9 @@ def test_openai_chat_completion_sync_in_txn_with_convo_id(set_trace_info):
 
 
 @reset_core_stats_engine()
-@validate_ml_events(chat_completion_recorded_events_no_convo_id)
+@validate_custom_events(chat_completion_recorded_events_no_convo_id)
 # One summary event, one system message, one user message, and one response message from the assistant
-@validate_ml_event_count(count=4)
+@validate_custom_event_count(count=4)
 @background_task()
 def test_openai_chat_completion_sync_in_txn_no_convo_id(set_trace_info):
     set_trace_info()
@@ -256,7 +256,7 @@ def test_openai_chat_completion_sync_in_txn_no_convo_id(set_trace_info):
 
 
 @reset_core_stats_engine()
-@validate_ml_event_count(count=0)
+@validate_custom_event_count(count=0)
 def test_openai_chat_completion_sync_outside_txn():
     add_custom_attribute("conversation_id", "my-awesome-id")
     openai.ChatCompletion.create(
@@ -264,18 +264,18 @@
     )
 
 
-@override_application_settings(disabled_ml_insights_settings)
+@override_application_settings(disabled_custom_insights_settings)
 @reset_core_stats_engine()
-@validate_ml_event_count(count=0)
+@validate_custom_event_count(count=0)
 @validate_transaction_metrics(
-    name="test_chat_completion:test_openai_chat_completion_sync_ml_insights_disabled",
+    name="test_chat_completion:test_openai_chat_completion_sync_custom_events_insights_disabled",
     custom_metrics=[
         ("Python/ML/OpenAI/%s" % openai.__version__, 1),
     ],
     background_task=True,
 )
 @background_task()
-def test_openai_chat_completion_sync_ml_insights_disabled(set_trace_info):
+def test_openai_chat_completion_sync_custom_events_insights_disabled(set_trace_info):
     set_trace_info()
     openai.ChatCompletion.create(
         model="gpt-3.5-turbo", messages=_test_openai_chat_completion_messages, temperature=0.7, max_tokens=100
@@ -283,8 +283,8 @@
 
 
 @reset_core_stats_engine()
-@validate_ml_events(chat_completion_recorded_events_no_convo_id)
-@validate_ml_event_count(count=4)
+@validate_custom_events(chat_completion_recorded_events_no_convo_id)
+@validate_custom_event_count(count=4)
 @background_task()
 def
test_openai_chat_completion_async_conversation_id_unset(loop, set_trace_info): set_trace_info() @@ -297,8 +297,8 @@ def test_openai_chat_completion_async_conversation_id_unset(loop, set_trace_info @reset_core_stats_engine() -@validate_ml_events(chat_completion_recorded_events) -@validate_ml_event_count(count=4) +@validate_custom_events(chat_completion_recorded_events) +@validate_custom_event_count(count=4) @validate_transaction_metrics( name="test_chat_completion:test_openai_chat_completion_async_conversation_id_set", custom_metrics=[ @@ -319,7 +319,7 @@ def test_openai_chat_completion_async_conversation_id_set(loop, set_trace_info): @reset_core_stats_engine() -@validate_ml_event_count(count=0) +@validate_custom_event_count(count=0) def test_openai_chat_completion_async_outside_transaction(loop): loop.run_until_complete( openai.ChatCompletion.acreate( @@ -328,18 +328,18 @@ def test_openai_chat_completion_async_outside_transaction(loop): ) -@override_application_settings(disabled_ml_insights_settings) +@override_application_settings(disabled_custom_insights_settings) @reset_core_stats_engine() -@validate_ml_event_count(count=0) +@validate_custom_event_count(count=0) @validate_transaction_metrics( - name="test_chat_completion:test_openai_chat_completion_async_disabled_ml_settings", + name="test_chat_completion:test_openai_chat_completion_async_disabled_custom_event_settings", custom_metrics=[ ("Python/ML/OpenAI/%s" % openai.__version__, 1), ], background_task=True, ) @background_task() -def test_openai_chat_completion_async_disabled_ml_settings(loop): +def test_openai_chat_completion_async_disabled_custom_event_settings(loop): loop.run_until_complete( openai.ChatCompletion.acreate( model="gpt-3.5-turbo", messages=_test_openai_chat_completion_messages, temperature=0.7, max_tokens=100 diff --git a/tests/mlmodel_openai/test_embeddings.py b/tests/mlmodel_openai/test_embeddings.py index 180052b0de..38b51d23f9 100644 --- a/tests/mlmodel_openai/test_embeddings.py +++ 
b/tests/mlmodel_openai/test_embeddings.py @@ -16,17 +16,16 @@ from testing_support.fixtures import ( # override_application_settings, override_application_settings, reset_core_stats_engine, + validate_custom_event_count, ) -from testing_support.validators.validate_ml_event_count import validate_ml_event_count -from testing_support.validators.validate_ml_events import validate_ml_events +from testing_support.validators.validate_custom_events import validate_custom_events from testing_support.validators.validate_transaction_metrics import ( validate_transaction_metrics, ) from newrelic.api.background_task import background_task -disabled_ml_insights_settings = {"ml_insights_events.enabled": False} - +disabled_custom_insights_settings = {"custom_insights_events.enabled": False} embedding_recorded_events = [ ( @@ -62,8 +61,8 @@ @reset_core_stats_engine() -@validate_ml_events(embedding_recorded_events) -@validate_ml_event_count(count=1) +@validate_custom_events(embedding_recorded_events) +@validate_custom_event_count(count=1) @validate_transaction_metrics( name="test_embeddings:test_openai_embedding_sync", custom_metrics=[ @@ -78,30 +77,30 @@ def test_openai_embedding_sync(set_trace_info): @reset_core_stats_engine() -@validate_ml_event_count(count=0) +@validate_custom_event_count(count=0) def test_openai_embedding_sync_outside_txn(): openai.Embedding.create(input="This is an embedding test.", model="text-embedding-ada-002") -@override_application_settings(disabled_ml_insights_settings) +@override_application_settings(disabled_custom_insights_settings) @reset_core_stats_engine() -@validate_ml_event_count(count=0) +@validate_custom_event_count(count=0) @validate_transaction_metrics( - name="test_embeddings:test_openai_chat_completion_sync_disabled_settings", + name="test_embeddings:test_openai_embedding_sync_disabled_settings", custom_metrics=[ ("Python/ML/OpenAI/%s" % openai.__version__, 1), ], background_task=True, ) @background_task() -def 
test_openai_chat_completion_sync_disabled_settings(set_trace_info): +def test_openai_embedding_sync_disabled_settings(set_trace_info): set_trace_info() openai.Embedding.create(input="This is an embedding test.", model="text-embedding-ada-002") @reset_core_stats_engine() -@validate_ml_events(embedding_recorded_events) -@validate_ml_event_count(count=1) +@validate_custom_events(embedding_recorded_events) +@validate_custom_event_count(count=1) @validate_transaction_metrics( name="test_embeddings:test_openai_embedding_async", custom_metrics=[ @@ -119,25 +118,25 @@ def test_openai_embedding_async(loop, set_trace_info): @reset_core_stats_engine() -@validate_ml_event_count(count=0) +@validate_custom_event_count(count=0) def test_openai_embedding_async_outside_transaction(loop): loop.run_until_complete( openai.Embedding.acreate(input="This is an embedding test.", model="text-embedding-ada-002") ) -@override_application_settings(disabled_ml_insights_settings) +@override_application_settings(disabled_custom_insights_settings) @reset_core_stats_engine() -@validate_ml_event_count(count=0) +@validate_custom_event_count(count=0) @validate_transaction_metrics( - name="test_embeddings:test_openai_embedding_async_disabled_ml_insights_events", + name="test_embeddings:test_openai_embedding_async_disabled_custom_insights_events", custom_metrics=[ ("Python/ML/OpenAI/%s" % openai.__version__, 1), ], background_task=True, ) @background_task() -def test_openai_embedding_async_disabled_ml_insights_events(loop): +def test_openai_embedding_async_disabled_custom_insights_events(loop): loop.run_until_complete( openai.Embedding.acreate(input="This is an embedding test.", model="text-embedding-ada-002") ) diff --git a/tests/mlmodel_openai/test_get_llm_message_ids.py b/tests/mlmodel_openai/test_get_llm_message_ids.py index e20245128e..af073f7300 100644 --- a/tests/mlmodel_openai/test_get_llm_message_ids.py +++ b/tests/mlmodel_openai/test_get_llm_message_ids.py @@ -13,12 +13,10 @@ # limitations 
under the License. import openai -from testing_support.fixtures import reset_core_stats_engine -from testing_support.validators.validate_ml_event_count import validate_ml_event_count - from newrelic.api.background_task import background_task from newrelic.api.ml_model import get_llm_message_ids, record_llm_feedback_event from newrelic.api.transaction import add_custom_attribute, current_transaction +from testing_support.fixtures import reset_core_stats_engine, validate_custom_event_count _test_openai_chat_completion_messages_1 = ( {"role": "system", "content": "You are a scientist."}, @@ -170,7 +168,7 @@ async def _run(): @reset_core_stats_engine() # Three chat completion messages and one chat completion summary for each create call (8 in total) # Three feedback events for the first create call -@validate_ml_event_count(11) +@validate_custom_event_count(11) @background_task() def test_get_llm_message_ids_mulitple_sync(set_trace_info): set_trace_info() @@ -203,7 +201,7 @@ def test_get_llm_message_ids_mulitple_sync(set_trace_info): @reset_core_stats_engine() -@validate_ml_event_count(11) +@validate_custom_event_count(11) @background_task() def test_get_llm_message_ids_mulitple_sync_no_conversation_id(set_trace_info): set_trace_info() diff --git a/tests/testing_support/validators/validate_custom_events.py b/tests/testing_support/validators/validate_custom_events.py new file mode 100644 index 0000000000..206ce08f1a --- /dev/null +++ b/tests/testing_support/validators/validate_custom_events.py @@ -0,0 +1,109 @@ +# Copyright 2010 New Relic, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import copy +import time + +from testing_support.fixtures import catch_background_exceptions + +from newrelic.common.object_wrapper import function_wrapper, transient_function_wrapper +from newrelic.packages import six + + +def validate_custom_events(events): + @function_wrapper + def _validate_wrapper(wrapped, instance, args, kwargs): + record_called = [] + recorded_events = [] + + @transient_function_wrapper("newrelic.core.stats_engine", "StatsEngine.record_transaction") + @catch_background_exceptions + def _validate_custom_events(wrapped, instance, args, kwargs): + record_called.append(True) + try: + result = wrapped(*args, **kwargs) + except: + raise + recorded_events[:] = [] + recorded_events.extend(list(instance._custom_events)) + + return result + + _new_wrapper = _validate_custom_events(wrapped) + val = _new_wrapper(*args, **kwargs) + assert record_called + found_events = copy.copy(recorded_events) + + record_called[:] = [] + recorded_events[:] = [] + + for expected in events: + matching_custom_events = 0 + mismatches = [] + for captured in found_events: + if _check_event_attributes(expected, captured, mismatches): + matching_custom_events += 1 + assert matching_custom_events == 1, _event_details(matching_custom_events, found_events, mismatches) + + return val + + return _validate_wrapper + + +def _check_event_attributes(expected, captured, mismatches): + assert len(captured) == 2 # [intrinsic, user attributes] + + intrinsics = captured[0] + + if intrinsics["type"] != expected[0]["type"]: + mismatches.append("key: type, value:<%s><%s>" % 
(expected[0]["type"], captured[0].get("type", None))) + return False + + now = time.time() + + if not (isinstance(intrinsics["timestamp"], int) and intrinsics["timestamp"] <= 1000.0 * now): + mismatches.append("key: timestamp, value:<%s>" % intrinsics["timestamp"]) + return False + + captured_keys = set(six.iterkeys(captured[1])) + expected_keys = set(six.iterkeys(expected[1])) + extra_keys = captured_keys - expected_keys + + if extra_keys: + mismatches.append("extra_keys: %s" % str(tuple(extra_keys))) + return False + + for key, value in six.iteritems(expected[1]): + if key in captured[1]: + captured_value = captured[1].get(key, None) + else: + mismatches.append("key: %s, value:<%s><%s>" % (key, value, captured[1].get(key, None))) + return False + + if value is not None: + if value != captured_value: + mismatches.append("key: %s, value:<%s><%s>" % (key, value, captured_value)) + return False + + return True + + +def _event_details(matching_custom_events, captured, mismatches): + details = [ + "matching_custom_events=%d" % matching_custom_events, + "mismatches=%s" % mismatches, + "captured_events=%s" % captured, + ] + + return "\n".join(details) diff --git a/tox.ini b/tox.ini index 25f602d455..0197fa5170 100644 --- a/tox.ini +++ b/tox.ini @@ -326,7 +326,7 @@ deps = framework_sanic-sanic{200904,210300,2109,2112,2203,2290}: websockets<11 ; For test_exception_in_middleware test, anyio is used: ; https://github.com/encode/starlette/pull/1157 - ; but anyiolatest creates breaking changes to our tests + ; but anyiolatest creates breaking changes to our tests ; (but not the instrumentation): ; https://github.com/agronholm/anyio/releases/tag/4.0.0 framework_starlette: anyio<4 From 7b7fa3ff9842a5df03483b76a53fb79a4ad2c55c Mon Sep 17 00:00:00 2001 From: Hannah Stepanek Date: Wed, 15 Nov 2023 10:29:42 -0800 Subject: [PATCH 004/199] Fix bug in max_attribute_value setting (#979) --- newrelic/config.py | 3 +-- newrelic/core/config.py | 9 ++++----- 2 files changed, 5 
insertions(+), 7 deletions(-) diff --git a/newrelic/config.py b/newrelic/config.py index 1725c4eedb..23e839a1e7 100644 --- a/newrelic/config.py +++ b/newrelic/config.py @@ -45,7 +45,6 @@ from newrelic.common.log_file import initialize_logging from newrelic.common.object_names import expand_builtin_exception_name from newrelic.core import trace_cache -from newrelic.core.attribute import MAX_ATTRIBUTE_LENGTH from newrelic.core.config import ( Settings, apply_config_setting, @@ -444,7 +443,7 @@ def _process_configuration(section): ) _process_setting(section, "custom_insights_events.enabled", "getboolean", None) _process_setting(section, "custom_insights_events.max_samples_stored", "getint", None) - _process_setting(section, "custom_insights_events.max_attribute_value", "getint", MAX_ATTRIBUTE_LENGTH) + _process_setting(section, "custom_insights_events.max_attribute_value", "getint", None) _process_setting(section, "ml_insights_events.enabled", "getboolean", None) _process_setting(section, "distributed_tracing.enabled", "getboolean", None) _process_setting(section, "distributed_tracing.exclude_newrelic_header", "getboolean", None) diff --git a/newrelic/core/config.py b/newrelic/core/config.py index 27eb085b13..2128483481 100644 --- a/newrelic/core/config.py +++ b/newrelic/core/config.py @@ -718,7 +718,10 @@ def default_otlp_host(host): _settings.transaction_events.attributes.include = [] _settings.custom_insights_events.enabled = True -_settings.custom_insights_events.max_attribute_value = MAX_ATTRIBUTE_LENGTH +_settings.custom_insights_events.max_attribute_value = _environ_as_int( + "NEW_RELIC_CUSTOM_INSIGHTS_EVENTS_MAX_ATTRIBUTE_VALUE", default=MAX_ATTRIBUTE_LENGTH +) + _settings.ml_insights_events.enabled = False _settings.distributed_tracing.enabled = _environ_as_bool("NEW_RELIC_DISTRIBUTED_TRACING_ENABLED", default=True) @@ -812,10 +815,6 @@ def default_otlp_host(host): "NEW_RELIC_CUSTOM_INSIGHTS_EVENTS_MAX_SAMPLES_STORED", CUSTOM_EVENT_RESERVOIR_SIZE ) 
-_settings.custom_insights_events.max_attribute_value = _environ_as_int( - "NEW_RELIC_CUSTOM_INSIGHTS_EVENTS_MAX_ATTRIBUTE_VALUE", MAX_ATTRIBUTE_LENGTH -) - _settings.event_harvest_config.harvest_limits.ml_event_data = _environ_as_int( "NEW_RELIC_ML_INSIGHTS_EVENTS_MAX_SAMPLES_STORED", ML_EVENT_RESERVOIR_SIZE ) From eafbd97755a6c787800bad767ded69bf53315c34 Mon Sep 17 00:00:00 2001 From: Hannah Stepanek Date: Wed, 15 Nov 2023 16:06:54 -0800 Subject: [PATCH 005/199] Set transaction_id = guid not _transaction_id (#977) * Use guid instead of _transaction_id * Change _transaction_id to _identity & add comments * Trigger tests * Move id override into if transaction block --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> --- newrelic/api/transaction.py | 5 ++-- newrelic/hooks/external_botocore.py | 6 ++--- newrelic/hooks/mlmodel_openai.py | 10 ++++---- .../_test_bedrock_chat_completion.py | 24 +++++++++---------- .../_test_bedrock_embeddings.py | 6 ++--- tests/external_botocore/conftest.py | 3 ++- .../test_bedrock_chat_completion.py | 1 - tests/mlmodel_openai/conftest.py | 1 + tests/mlmodel_openai/test_chat_completion.py | 16 ++++++------- tests/mlmodel_openai/test_embeddings.py | 2 +- 10 files changed, 38 insertions(+), 36 deletions(-) diff --git a/newrelic/api/transaction.py b/newrelic/api/transaction.py index fea2d28653..643a5db597 100644 --- a/newrelic/api/transaction.py +++ b/newrelic/api/transaction.py @@ -174,7 +174,7 @@ def __init__(self, application, enabled=None, source=None): self.thread_id = None - self._transaction_id = id(self) + self._identity = id(self) self._transaction_lock = threading.Lock() self._dead = False @@ -273,6 +273,7 @@ def __init__(self, application, enabled=None, source=None): trace_id = "%032x" % random.getrandbits(128) # 16-digit random hex. Padded with zeros in the front. + # This is the official transactionId in the UI. self.guid = trace_id[:16] # 32-digit random hex. Padded with zeros in the front. 
@@ -413,7 +414,7 @@ def __exit__(self, exc, value, tb): if not self.enabled: return - if self._transaction_id != id(self): + if self._identity != id(self): return if not self._settings: diff --git a/newrelic/hooks/external_botocore.py b/newrelic/hooks/external_botocore.py index 8fd9c05e09..72083b2abd 100644 --- a/newrelic/hooks/external_botocore.py +++ b/newrelic/hooks/external_botocore.py @@ -110,7 +110,7 @@ def create_chat_completion_message_event( "request_id": request_id, "span_id": span_id, "trace_id": trace_id, - "transaction_id": transaction._transaction_id, + "transaction_id": transaction.guid, "content": message.get("content", ""), "role": message.get("role"), "completion_id": chat_completion_id, @@ -368,7 +368,7 @@ def handle_embedding_event( "span_id": span_id, "trace_id": trace_id, "request_id": request_id, - "transaction_id": transaction._transaction_id, + "transaction_id": transaction.guid, "api_key_last_four_digits": client._request_signer._credentials.access_key[-4:], "duration": duration, "request.model": model, @@ -405,7 +405,7 @@ def handle_chat_completion_event( "conversation_id": conversation_id, "span_id": span_id, "trace_id": trace_id, - "transaction_id": transaction._transaction_id, + "transaction_id": transaction.guid, "request_id": request_id, "duration": duration, "request.model": model, diff --git a/newrelic/hooks/mlmodel_openai.py b/newrelic/hooks/mlmodel_openai.py index a53f2fc623..458b01cd6f 100644 --- a/newrelic/hooks/mlmodel_openai.py +++ b/newrelic/hooks/mlmodel_openai.py @@ -89,7 +89,7 @@ def wrap_embedding_create(wrapped, instance, args, kwargs): "span_id": span_id, "trace_id": trace_id, "request_id": request_id, - "transaction_id": transaction._transaction_id, + "transaction_id": transaction.guid, "input": kwargs.get("input", ""), "api_key_last_four_digits": f"sk-{response.api_key[-4:]}", "duration": ft.duration, @@ -176,7 +176,7 @@ def wrap_chat_completion_create(wrapped, instance, args, kwargs): "conversation_id": 
conversation_id, "span_id": span_id, "trace_id": trace_id, - "transaction_id": transaction._transaction_id, + "transaction_id": transaction.guid, "request_id": request_id, "api_key_last_four_digits": f"sk-{api_key[-4:]}" if api_key else "", "duration": ft.duration, @@ -280,7 +280,7 @@ def create_chat_completion_message_event( "request_id": request_id, "span_id": span_id, "trace_id": trace_id, - "transaction_id": transaction._transaction_id, + "transaction_id": transaction.guid, "content": message.get("content", ""), "role": message.get("role", ""), "completion_id": chat_completion_id, @@ -345,7 +345,7 @@ async def wrap_embedding_acreate(wrapped, instance, args, kwargs): "response.model": response.get("model", ""), "appName": settings.app_name, "trace_id": trace_id, - "transaction_id": transaction._transaction_id, + "transaction_id": transaction.guid, "span_id": span_id, "response.usage.total_tokens": total_tokens, "response.usage.prompt_tokens": prompt_tokens, @@ -433,7 +433,7 @@ async def wrap_chat_completion_acreate(wrapped, instance, args, kwargs): "request_id": request_id, "span_id": span_id, "trace_id": trace_id, - "transaction_id": transaction._transaction_id, + "transaction_id": transaction.guid, "api_key_last_four_digits": f"sk-{api_key[-4:]}" if api_key else "", "duration": ft.duration, "request.model": kwargs.get("model") or kwargs.get("engine") or "", diff --git a/tests/external_botocore/_test_bedrock_chat_completion.py b/tests/external_botocore/_test_bedrock_chat_completion.py index 9abdca83cf..5c91ade6c6 100644 --- a/tests/external_botocore/_test_bedrock_chat_completion.py +++ b/tests/external_botocore/_test_bedrock_chat_completion.py @@ -13,7 +13,7 @@ "id": None, # UUID that varies with each run "appName": "Python Agent Test (external_botocore)", "conversation_id": "my-awesome-id", - "transaction_id": None, + "transaction_id": "transaction-id", "span_id": "span-id", "trace_id": "trace-id", "request_id": "03524118-8d77-430f-9e08-63b5c03a40cf", @@ -41,7 
+41,7 @@ "request_id": "03524118-8d77-430f-9e08-63b5c03a40cf", "span_id": "span-id", "trace_id": "trace-id", - "transaction_id": None, + "transaction_id": "transaction-id", "content": "What is 212 degrees Fahrenheit converted to Celsius?", "role": "user", "completion_id": None, @@ -60,7 +60,7 @@ "request_id": "03524118-8d77-430f-9e08-63b5c03a40cf", "span_id": "span-id", "trace_id": "trace-id", - "transaction_id": None, + "transaction_id": "transaction-id", "content": "\nUse the formula,\n°C = (°F - 32) x 5/9\n= 212 x 5/9\n= 100 degrees Celsius\n212 degrees Fahrenheit is 100 degrees Celsius.", "role": "assistant", "completion_id": None, @@ -78,7 +78,7 @@ "id": None, # UUID that varies with each run "appName": "Python Agent Test (external_botocore)", "conversation_id": "my-awesome-id", - "transaction_id": None, + "transaction_id": "transaction-id", "span_id": "span-id", "trace_id": "trace-id", "request_id": "c863d9fc-888b-421c-a175-ac5256baec62", @@ -104,7 +104,7 @@ "request_id": "c863d9fc-888b-421c-a175-ac5256baec62", "span_id": "span-id", "trace_id": "trace-id", - "transaction_id": None, + "transaction_id": "transaction-id", "content": "What is 212 degrees Fahrenheit converted to Celsius?", "role": "user", "completion_id": None, @@ -123,7 +123,7 @@ "request_id": "c863d9fc-888b-421c-a175-ac5256baec62", "span_id": "span-id", "trace_id": "trace-id", - "transaction_id": None, + "transaction_id": "transaction-id", "content": "\n212 degrees Fahrenheit is equal to 100 degrees Celsius.", "role": "assistant", "completion_id": None, @@ -141,7 +141,7 @@ "id": None, # UUID that varies with each run "appName": "Python Agent Test (external_botocore)", "conversation_id": "my-awesome-id", - "transaction_id": None, + "transaction_id": "transaction-id", "span_id": "span-id", "trace_id": "trace-id", "request_id": "7b0b37c6-85fb-4664-8f5b-361ca7b1aa18", @@ -166,7 +166,7 @@ "request_id": "7b0b37c6-85fb-4664-8f5b-361ca7b1aa18", "span_id": "span-id", "trace_id": "trace-id", - 
"transaction_id": None, + "transaction_id": "transaction-id", "content": "Human: What is 212 degrees Fahrenheit converted to Celsius? Assistant:", "role": "user", "completion_id": None, @@ -185,7 +185,7 @@ "request_id": "7b0b37c6-85fb-4664-8f5b-361ca7b1aa18", "span_id": "span-id", "trace_id": "trace-id", - "transaction_id": None, + "transaction_id": "transaction-id", "content": " Okay, here are the conversion steps:\n212 degrees Fahrenheit\n- Subtract 32 from 212 to get 180 (to convert from Fahrenheit to Celsius scale)\n- Multiply by 5/9 (because the formula is °C = (°F - 32) × 5/9)\n- 180 × 5/9 = 100\n\nSo 212 degrees Fahrenheit converted to Celsius is 100 degrees Celsius.", "role": "assistant", "completion_id": None, @@ -203,7 +203,7 @@ "id": None, # UUID that varies with each run "appName": "Python Agent Test (external_botocore)", "conversation_id": "my-awesome-id", - "transaction_id": None, + "transaction_id": "transaction-id", "span_id": "span-id", "trace_id": "trace-id", "request_id": "e77422c8-fbbf-4e17-afeb-c758425c9f97", @@ -229,7 +229,7 @@ "request_id": "e77422c8-fbbf-4e17-afeb-c758425c9f97", "span_id": "span-id", "trace_id": "trace-id", - "transaction_id": None, + "transaction_id": "transaction-id", "content": "What is 212 degrees Fahrenheit converted to Celsius?", "role": "user", "completion_id": None, @@ -248,7 +248,7 @@ "request_id": "e77422c8-fbbf-4e17-afeb-c758425c9f97", "span_id": "span-id", "trace_id": "trace-id", - "transaction_id": None, + "transaction_id": "transaction-id", "content": " 212°F is equivalent to 100°C. \n\nFahrenheit and Celsius are two temperature scales commonly used in everyday life. The Fahrenheit scale is based on 32°F for the freezing point of water and 212°F for the boiling point of water. On the other hand, the Celsius scale uses 0°C and 100°C as the freezing and boiling points of water, respectively. 
\n\nTo convert from Fahrenheit to Celsius, we subtract 32 from the Fahrenheit temperature and multiply the result", "role": "assistant", "completion_id": None, diff --git a/tests/external_botocore/_test_bedrock_embeddings.py b/tests/external_botocore/_test_bedrock_embeddings.py index 8fb2ceecee..2367f7af81 100644 --- a/tests/external_botocore/_test_bedrock_embeddings.py +++ b/tests/external_botocore/_test_bedrock_embeddings.py @@ -10,7 +10,7 @@ { "id": None, # UUID that varies with each run "appName": "Python Agent Test (external_botocore)", - "transaction_id": None, + "transaction_id": "transaction-id", "span_id": "span-id", "trace_id": "trace-id", "input": "This is an embedding test.", @@ -32,7 +32,7 @@ { "id": None, # UUID that varies with each run "appName": "Python Agent Test (external_botocore)", - "transaction_id": None, + "transaction_id": "transaction-id", "span_id": "span-id", "trace_id": "trace-id", "input": "This is an embedding test.", @@ -47,7 +47,7 @@ "ingest_source": "Python", }, ), - ] + ], } embedding_expected_client_errors = { diff --git a/tests/external_botocore/conftest.py b/tests/external_botocore/conftest.py index c992726b3e..38c2fb03d1 100644 --- a/tests/external_botocore/conftest.py +++ b/tests/external_botocore/conftest.py @@ -42,7 +42,7 @@ "transaction_tracer.stack_trace_threshold": 0.0, "debug.log_data_collector_payloads": True, "debug.record_transaction_failure": True, - "custom_insights_events.max_attribute_value": 4096 + "custom_insights_events.max_attribute_value": 4096, } collector_agent_registration = collector_agent_registration_fixture( app_name="Python Agent Test (external_botocore)", @@ -155,6 +155,7 @@ def set_trace_info(): def _set_trace_info(): txn = current_transaction() if txn: + txn.guid = "transaction-id" txn._trace_id = "trace-id" trace = current_trace() if trace: diff --git a/tests/external_botocore/test_bedrock_chat_completion.py b/tests/external_botocore/test_bedrock_chat_completion.py index 18578a887f..29489191f7 
100644 --- a/tests/external_botocore/test_bedrock_chat_completion.py +++ b/tests/external_botocore/test_bedrock_chat_completion.py @@ -149,7 +149,6 @@ def _test(): @reset_core_stats_engine() @validate_custom_event_count(count=0) def test_bedrock_chat_completion_outside_txn(set_trace_info, exercise_model): - set_trace_info() add_custom_attribute("conversation_id", "my-awesome-id") exercise_model(prompt=_test_bedrock_chat_completion_prompt, temperature=0.7, max_tokens=100) diff --git a/tests/mlmodel_openai/conftest.py b/tests/mlmodel_openai/conftest.py index 4513be742d..b3511235af 100644 --- a/tests/mlmodel_openai/conftest.py +++ b/tests/mlmodel_openai/conftest.py @@ -56,6 +56,7 @@ def set_trace_info(): def set_info(): txn = current_transaction() if txn: + txn.guid = "transaction-id" txn._trace_id = "trace-id" trace = current_trace() if trace: diff --git a/tests/mlmodel_openai/test_chat_completion.py b/tests/mlmodel_openai/test_chat_completion.py index c864e4f030..5a08649515 100644 --- a/tests/mlmodel_openai/test_chat_completion.py +++ b/tests/mlmodel_openai/test_chat_completion.py @@ -40,7 +40,7 @@ "id": None, # UUID that varies with each run "appName": "Python Agent Test (mlmodel_openai)", "conversation_id": "my-awesome-id", - "transaction_id": None, + "transaction_id": "transaction-id", "span_id": "span-id", "trace_id": "trace-id", "request_id": "49dbbffbd3c3f4612aa48def69059ccd", @@ -77,7 +77,7 @@ "request_id": "49dbbffbd3c3f4612aa48def69059ccd", "span_id": "span-id", "trace_id": "trace-id", - "transaction_id": None, + "transaction_id": "transaction-id", "content": "You are a scientist.", "role": "system", "completion_id": None, @@ -96,7 +96,7 @@ "request_id": "49dbbffbd3c3f4612aa48def69059ccd", "span_id": "span-id", "trace_id": "trace-id", - "transaction_id": None, + "transaction_id": "transaction-id", "content": "What is 212 degrees Fahrenheit converted to Celsius?", "role": "user", "completion_id": None, @@ -115,7 +115,7 @@ "request_id": 
"49dbbffbd3c3f4612aa48def69059ccd", "span_id": "span-id", "trace_id": "trace-id", - "transaction_id": None, + "transaction_id": "transaction-id", "content": "212 degrees Fahrenheit is equal to 100 degrees Celsius.", "role": "assistant", "completion_id": None, @@ -155,7 +155,7 @@ def test_openai_chat_completion_sync_in_txn_with_convo_id(set_trace_info): "id": None, # UUID that varies with each run "appName": "Python Agent Test (mlmodel_openai)", "conversation_id": "", - "transaction_id": None, + "transaction_id": "transaction-id", "span_id": "span-id", "trace_id": "trace-id", "request_id": "49dbbffbd3c3f4612aa48def69059ccd", @@ -192,7 +192,7 @@ def test_openai_chat_completion_sync_in_txn_with_convo_id(set_trace_info): "request_id": "49dbbffbd3c3f4612aa48def69059ccd", "span_id": "span-id", "trace_id": "trace-id", - "transaction_id": None, + "transaction_id": "transaction-id", "content": "You are a scientist.", "role": "system", "completion_id": None, @@ -211,7 +211,7 @@ def test_openai_chat_completion_sync_in_txn_with_convo_id(set_trace_info): "request_id": "49dbbffbd3c3f4612aa48def69059ccd", "span_id": "span-id", "trace_id": "trace-id", - "transaction_id": None, + "transaction_id": "transaction-id", "content": "What is 212 degrees Fahrenheit converted to Celsius?", "role": "user", "completion_id": None, @@ -230,7 +230,7 @@ def test_openai_chat_completion_sync_in_txn_with_convo_id(set_trace_info): "request_id": "49dbbffbd3c3f4612aa48def69059ccd", "span_id": "span-id", "trace_id": "trace-id", - "transaction_id": None, + "transaction_id": "transaction-id", "content": "212 degrees Fahrenheit is equal to 100 degrees Celsius.", "role": "assistant", "completion_id": None, diff --git a/tests/mlmodel_openai/test_embeddings.py b/tests/mlmodel_openai/test_embeddings.py index 38b51d23f9..23e09b18af 100644 --- a/tests/mlmodel_openai/test_embeddings.py +++ b/tests/mlmodel_openai/test_embeddings.py @@ -33,7 +33,7 @@ { "id": None, # UUID that varies with each run "appName": "Python 
Agent Test (mlmodel_openai)", - "transaction_id": None, + "transaction_id": "transaction-id", "span_id": "span-id", "trace_id": "trace-id", "input": "This is an embedding test.", From 7d4828ef34fbc298920ae375e9a2add0142e39aa Mon Sep 17 00:00:00 2001 From: Hannah Stepanek Date: Fri, 17 Nov 2023 14:57:56 -0800 Subject: [PATCH 006/199] Guard metadata logic (#981) * Guard against metadata overriding built-in event data * Trigger tests * Use copy instead Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> --------- Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> --- newrelic/api/ml_model.py | 26 +++++++++---------- .../test_record_llm_feedback_event.py | 10 ++++--- 2 files changed, 20 insertions(+), 16 deletions(-) diff --git a/newrelic/api/ml_model.py b/newrelic/api/ml_model.py index 1951f91312..3d15cf8d37 100644 --- a/newrelic/api/ml_model.py +++ b/newrelic/api/ml_model.py @@ -67,18 +67,18 @@ def record_llm_feedback_event( return feedback_message_id = str(uuid.uuid4()) - metadata = metadata or {} - - feedback_message_event = { - "id": feedback_message_id, - "message_id": message_id, - "rating": rating, - "conversation_id": conversation_id or "", - "request_id": request_id or "", - "category": category or "", - "message": message or "", - "ingest_source": "Python", - } - feedback_message_event.update(metadata) + feedback_message_event = metadata.copy() if metadata else {} + feedback_message_event.update( + { + "id": feedback_message_id, + "message_id": message_id, + "rating": rating, + "conversation_id": conversation_id or "", + "request_id": request_id or "", + "category": category or "", + "message": message or "", + "ingest_source": "Python", + } + ) transaction.record_custom_event("LlmFeedbackMessage", feedback_message_event) diff --git a/tests/agent_features/test_record_llm_feedback_event.py
b/tests/agent_features/test_record_llm_feedback_event.py index c9489c050e..1adf9d3bef 100644 --- a/tests/agent_features/test_record_llm_feedback_event.py +++ b/tests/agent_features/test_record_llm_feedback_event.py @@ -12,8 +12,12 @@ # See the License for the specific language governing permissions and # limitations under the License. -from testing_support.fixtures import reset_core_stats_engine, validate_custom_event_count +from testing_support.fixtures import ( + reset_core_stats_engine, + validate_custom_event_count, +) from testing_support.validators.validate_custom_events import validate_custom_events + from newrelic.api.background_task import background_task from newrelic.api.ml_model import record_llm_feedback_event @@ -36,7 +40,7 @@ def test_record_llm_feedback_event_all_args_supplied(): }, ), ] - + @validate_custom_events(llm_feedback_all_args_recorded_events) @background_task() def _test(): @@ -47,7 +51,7 @@ def _test(): request_id="request_id", conversation_id="conversation_id", message="message", - metadata={"foo": "bar"}, + metadata={"foo": "bar", "message": "custom-message"}, ) _test() From 091a815ab904c5f6795a4af739caa2b4feb41a03 Mon Sep 17 00:00:00 2001 From: Uma Annamalai Date: Mon, 4 Dec 2023 15:09:12 -0800 Subject: [PATCH 007/199] Refactor OpenAI Error Tracing (#987) * Error refactoring and is_response. * Fix merge conflicts. * Update dictionary merging syntax. * Remove breakpoint. * Address review feedback. * Formatting. * Formatting tests. * Address linting errors. * [Mega-Linter] Apply linters fixes * Trigger tests * Add test fixes. * [Mega-Linter] Apply linters fixes * Trigger tests * Separate message input and output lists. * Uncomment tests. * Remove error_response_id. 
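The input/output split and ID scheme described in the bullets above can be sketched in isolation. Input messages are numbered first and output messages continue the same sequence, so IDs derived from one response never collide; when no response ID exists (the error path), a random UUID is used instead. The helper name below is illustrative, not part of the patch:

```python
import uuid


def assign_message_ids(input_messages, output_messages, response_id=None):
    """Derive per-message event IDs; outputs are offset past the inputs."""
    ids = []
    for index in range(len(input_messages) + len(output_messages)):
        if response_id:
            # Deterministic ID: "<response_id>-<index>"
            ids.append("%s-%d" % (response_id, index))
        else:
            # No response ID (e.g. the request raised): fall back to a UUID
            ids.append(str(uuid.uuid4()))
    return ids


print(assign_message_ids(["system", "user"], ["assistant"], response_id="chatcmpl-123"))
# → ['chatcmpl-123-0', 'chatcmpl-123-1', 'chatcmpl-123-2']
```

Because output indices start at `len(input_messages)`, an output message event can never reuse an input message's ID for the same response.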
--------- Co-authored-by: umaannamalai Co-authored-by: Hannah Stepanek --- newrelic/hooks/mlmodel_openai.py | 487 ++++++++++++------ tests/mlmodel_openai/test_chat_completion.py | 2 + .../test_chat_completion_error.py | 327 +++++++++--- tests/mlmodel_openai/test_embeddings_error.py | 169 ++++-- 4 files changed, 727 insertions(+), 258 deletions(-) diff --git a/newrelic/hooks/mlmodel_openai.py b/newrelic/hooks/mlmodel_openai.py index 458b01cd6f..1cac395928 100644 --- a/newrelic/hooks/mlmodel_openai.py +++ b/newrelic/hooks/mlmodel_openai.py @@ -27,73 +27,85 @@ OPENAI_VERSION = get_package_version("openai") -def openai_error_attributes(exception, request_args): - api_key = getattr(openai, "api_key", None) - api_key_last_four_digits = f"sk-{api_key[-4:]}" if api_key else "" - number_of_messages = len(request_args.get("messages", [])) - - error_attributes = { - "api_key_last_four_digits": api_key_last_four_digits, - "request.model": request_args.get("model") or request_args.get("engine") or "", - "request.temperature": request_args.get("temperature", ""), - "request.max_tokens": request_args.get("max_tokens", ""), - "vendor": "openAI", - "ingest_source": "Python", - "response.organization": getattr(exception, "organization", ""), - "response.number_of_messages": number_of_messages, - "http.statusCode": getattr(exception, "http_status", ""), - "error.message": getattr(exception, "_message", ""), - "error.code": getattr(getattr(exception, "error", ""), "code", ""), - "error.param": getattr(exception, "param", ""), - } - return error_attributes - - def wrap_embedding_create(wrapped, instance, args, kwargs): transaction = current_transaction() if not transaction: return wrapped(*args, **kwargs) + # Framework metric also used for entity tagging in the UI transaction.add_ml_model_info("OpenAI", OPENAI_VERSION) + # Obtain attributes to be stored on embedding events regardless of whether we hit an error + embedding_id = str(uuid.uuid4()) + + # Get API key without using the 
response so we can store it before the response is returned in case of errors + api_key = getattr(openai, "api_key", None) + api_key_last_four_digits = f"sk-{api_key[-4:]}" if api_key else "" + + # Get trace information + available_metadata = get_trace_linking_metadata() + span_id = available_metadata.get("span.id", "") + trace_id = available_metadata.get("trace.id", "") + + settings = transaction.settings if transaction.settings is not None else global_settings() + ft_name = callable_name(wrapped) with FunctionTrace(ft_name) as ft: try: response = wrapped(*args, **kwargs) except Exception as exc: - error_attributes = openai_error_attributes(exc, kwargs) - exc._nr_message = error_attributes.pop("error.message") + notice_error_attributes = { + "http.statusCode": getattr(exc, "http_status", ""), + "error.message": getattr(exc, "_message", ""), + "error.code": getattr(getattr(exc, "error", ""), "code", ""), + "error.param": getattr(exc, "param", ""), + "embedding_id": embedding_id, + } + exc._nr_message = notice_error_attributes.pop("error.message") ft.notice_error( - attributes=error_attributes, + attributes=notice_error_attributes, ) + # Gather attributes to add to embedding summary event in error context + exc_organization = getattr(exc, "organization", "") + error_embedding_dict = { + "id": embedding_id, + "appName": settings.app_name, + "api_key_last_four_digits": api_key_last_four_digits, + "span_id": span_id, + "trace_id": trace_id, + "transaction_id": transaction.guid, + "input": kwargs.get("input", ""), + "request.model": kwargs.get("model") or kwargs.get("engine") or "", + "vendor": "openAI", + "ingest_source": "Python", + "response.organization": "" if exc_organization is None else exc_organization, + "duration": ft.duration, + "error": True, + } + + transaction.record_custom_event("LlmEmbedding", error_embedding_dict) + raise if not response: return response - available_metadata = get_trace_linking_metadata() - span_id = available_metadata.get("span.id", 
"") - trace_id = available_metadata.get("trace.id", "") - embedding_id = str(uuid.uuid4()) - - response_headers = getattr(response, "_nr_response_headers", None) - request_id = response_headers.get("x-request-id", "") if response_headers else "" response_model = response.get("model", "") response_usage = response.get("usage", {}) + response_headers = getattr(response, "_nr_response_headers", None) + request_id = response_headers.get("x-request-id", "") if response_headers else "" - settings = transaction.settings if transaction.settings is not None else global_settings() - - embedding_dict = { + full_embedding_response_dict = { "id": embedding_id, "appName": settings.app_name, + "api_key_last_four_digits": api_key_last_four_digits, "span_id": span_id, "trace_id": trace_id, - "request_id": request_id, "transaction_id": transaction.guid, "input": kwargs.get("input", ""), - "api_key_last_four_digits": f"sk-{response.api_key[-4:]}", - "duration": ft.duration, "request.model": kwargs.get("model") or kwargs.get("engine") or "", + "request_id": request_id, + "duration": ft.duration, "response.model": response_model, "response.organization": response.organization, "response.api_type": response.api_type, @@ -122,7 +134,7 @@ def wrap_embedding_create(wrapped, instance, args, kwargs): "ingest_source": "Python", } - transaction.record_custom_event("LlmEmbedding", embedding_dict) + transaction.record_custom_event("LlmEmbedding", full_embedding_response_dict) return response @@ -133,61 +145,117 @@ def wrap_chat_completion_create(wrapped, instance, args, kwargs): if not transaction: return wrapped(*args, **kwargs) + # Framework metric also used for entity tagging in the UI transaction.add_ml_model_info("OpenAI", OPENAI_VERSION) + request_message_list = kwargs.get("messages", []) + + # Get API key without using the response so we can store it before the response is returned in case of errors + api_key = getattr(openai, "api_key", None) + api_key_last_four_digits = 
f"sk-{api_key[-4:]}" if api_key else "" + + # Get trace information + available_metadata = get_trace_linking_metadata() + span_id = available_metadata.get("span.id", "") + trace_id = available_metadata.get("trace.id", "") + + # Get conversation ID off of the transaction + custom_attrs_dict = transaction._custom_params + conversation_id = custom_attrs_dict.get("conversation_id", "") + + settings = transaction.settings if transaction.settings is not None else global_settings() + app_name = settings.app_name + completion_id = str(uuid.uuid4()) + ft_name = callable_name(wrapped) with FunctionTrace(ft_name) as ft: try: response = wrapped(*args, **kwargs) except Exception as exc: - error_attributes = openai_error_attributes(exc, kwargs) - exc._nr_message = error_attributes.pop("error.message") + exc_organization = getattr(exc, "organization", "") + + notice_error_attributes = { + "http.statusCode": getattr(exc, "http_status", ""), + "error.message": getattr(exc, "_message", ""), + "error.code": getattr(getattr(exc, "error", ""), "code", ""), + "error.param": getattr(exc, "param", ""), + "completion_id": completion_id, + } + exc._nr_message = notice_error_attributes.pop("error.message") ft.notice_error( - attributes=error_attributes, + attributes=notice_error_attributes, ) + # Gather attributes to add to chat completion summary event in error context + error_chat_completion_dict = { + "id": completion_id, + "appName": app_name, + "conversation_id": conversation_id, + "api_key_last_four_digits": api_key_last_four_digits, + "span_id": span_id, + "trace_id": trace_id, + "transaction_id": transaction.guid, + "response.number_of_messages": len(request_message_list), + "request.model": kwargs.get("model") or kwargs.get("engine") or "", + "request.temperature": kwargs.get("temperature", ""), + "request.max_tokens": kwargs.get("max_tokens", ""), + "vendor": "openAI", + "ingest_source": "Python", + "response.organization": "" if exc_organization is None else exc_organization, + 
"duration": ft.duration, + "error": True, + } + transaction.record_custom_event("LlmChatCompletionSummary", error_chat_completion_dict) + + create_chat_completion_message_event( + transaction, + app_name, + request_message_list, + completion_id, + span_id, + trace_id, + "", + None, + "", + conversation_id, + None, + ) + raise if not response: return response - custom_attrs_dict = transaction._custom_params - conversation_id = custom_attrs_dict.get("conversation_id", "") - - chat_completion_id = str(uuid.uuid4()) - available_metadata = get_trace_linking_metadata() - span_id = available_metadata.get("span.id", "") - trace_id = available_metadata.get("trace.id", "") - + # At this point, we have a response so we can grab attributes only available on the response object response_headers = getattr(response, "_nr_response_headers", None) response_model = response.get("model", "") - settings = transaction.settings if transaction.settings is not None else global_settings() response_id = response.get("id") request_id = response_headers.get("x-request-id", "") - api_key = getattr(response, "api_key", None) response_usage = response.get("usage", {}) messages = kwargs.get("messages", []) choices = response.get("choices", []) - chat_completion_summary_dict = { - "id": chat_completion_id, - "appName": settings.app_name, + full_chat_completion_summary_dict = { + "id": completion_id, + "appName": app_name, "conversation_id": conversation_id, + "api_key_last_four_digits": api_key_last_four_digits, "span_id": span_id, "trace_id": trace_id, "transaction_id": transaction.guid, + "request.model": kwargs.get("model") or kwargs.get("engine") or "", + "request.temperature": kwargs.get("temperature", ""), + "request.max_tokens": kwargs.get("max_tokens", ""), + "vendor": "openAI", + "ingest_source": "Python", "request_id": request_id, - "api_key_last_four_digits": f"sk-{api_key[-4:]}" if api_key else "", "duration": ft.duration, - "request.model": kwargs.get("model") or kwargs.get("engine") 
or "", "response.model": response_model, "response.organization": getattr(response, "organization", ""), "response.usage.completion_tokens": response_usage.get("completion_tokens", "") if any(response_usage) else "", "response.usage.total_tokens": response_usage.get("total_tokens", "") if any(response_usage) else "", "response.usage.prompt_tokens": response_usage.get("prompt_tokens", "") if any(response_usage) else "", - "request.temperature": kwargs.get("temperature", ""), - "request.max_tokens": kwargs.get("max_tokens", ""), "response.choices.finish_reason": choices[0].finish_reason if choices else "", "response.api_type": getattr(response, "api_type", ""), "response.headers.llmVersion": response_headers.get("openai-version", ""), @@ -209,31 +277,29 @@ def wrap_chat_completion_create(wrapped, instance, args, kwargs): "response.headers.ratelimitRemainingRequests": check_rate_limit_header( response_headers, "x-ratelimit-remaining-requests", True ), - "vendor": "openAI", - "ingest_source": "Python", "response.number_of_messages": len(messages) + len(choices), } - transaction.record_custom_event("LlmChatCompletionSummary", chat_completion_summary_dict) + transaction.record_custom_event("LlmChatCompletionSummary", full_chat_completion_summary_dict) - message_list = list(messages) - if choices: - message_list.extend([choices[0].message]) + input_message_list = list(messages) + output_message_list = [choices[0].message] if choices else None message_ids = create_chat_completion_message_event( transaction, settings.app_name, - message_list, - chat_completion_id, + input_message_list, + completion_id, span_id, trace_id, response_model, response_id, request_id, conversation_id, + output_message_list, ) - # Cache message ids on transaction for retrieval after open ai call completion. + # Cache message IDs on transaction for retrieval after OpenAI call completion. 
if not hasattr(transaction, "_nr_message_ids"): transaction._nr_message_ids = {} transaction._nr_message_ids[response_id] = message_ids @@ -260,7 +326,7 @@ def check_rate_limit_header(response_headers, header_name, is_int): def create_chat_completion_message_event( transaction, app_name, - message_list, + input_message_list, chat_completion_id, span_id, trace_id, @@ -268,12 +334,24 @@ def create_chat_completion_message_event( response_id, request_id, conversation_id, + output_message_list, ): message_ids = [] - for index, message in enumerate(message_list): - message_id = "%s-%s" % (response_id, index) + + # Loop through all input messages received from the create request and emit a custom event for each one + for index, message in enumerate(input_message_list): + message_content = message.get("content", "") + + # Response ID was set, append message index to it. + if response_id: + message_id = "%s-%d" % (response_id, index) + # No response IDs, use random UUID + else: + message_id = str(uuid.uuid4()) + message_ids.append(message_id) - chat_completion_message_dict = { + + chat_completion_input_message_dict = { "id": message_id, "appName": app_name, "conversation_id": conversation_id, @@ -281,16 +359,53 @@ def create_chat_completion_message_event( "span_id": span_id, "trace_id": trace_id, "transaction_id": transaction.guid, - "content": message.get("content", ""), + "content": message_content, "role": message.get("role", ""), "completion_id": chat_completion_id, "sequence": index, - "response.model": response_model, + "response.model": response_model if response_model else "", "vendor": "openAI", "ingest_source": "Python", } - - transaction.record_custom_event("LlmChatCompletionMessage", chat_completion_message_dict) + + transaction.record_custom_event("LlmChatCompletionMessage", chat_completion_input_message_dict) + + if output_message_list: + # Loop through all output messages received from the LLM response and emit a custom event for each one + for index, message 
in enumerate(output_message_list): + message_content = message.get("content", "") + + # Add offset of input_message_length so we don't receive any duplicate index values that match the input message IDs + index += len(input_message_list) + + # Response ID was set, append message index to it. + if response_id: + message_id = "%s-%d" % (response_id, index) + # No response IDs, use random UUID + else: + message_id = str(uuid.uuid4()) + + message_ids.append(message_id) + + chat_completion_output_message_dict = { + "id": message_id, + "appName": app_name, + "conversation_id": conversation_id, + "request_id": request_id, + "span_id": span_id, + "trace_id": trace_id, + "transaction_id": transaction.guid, + "content": message_content, + "role": message.get("role", ""), + "completion_id": chat_completion_id, + "sequence": index, + "response.model": response_model if response_model else "", + "vendor": "openAI", + "ingest_source": "Python", + "is_response": True, + } + + transaction.record_custom_event("LlmChatCompletionMessage", chat_completion_output_message_dict) return (conversation_id, request_id, message_ids) @@ -300,55 +415,85 @@ async def wrap_embedding_acreate(wrapped, instance, args, kwargs): if not transaction: return await wrapped(*args, **kwargs) + # Framework metric also used for entity tagging in the UI transaction.add_ml_model_info("OpenAI", OPENAI_VERSION) + # Obtain attributes to be stored on embedding events regardless of whether we hit an error + embedding_id = str(uuid.uuid4()) + + # Get API key without using the response so we can store it before the response is returned in case of errors + api_key = getattr(openai, "api_key", None) + api_key_last_four_digits = f"sk-{api_key[-4:]}" if api_key else "" + + # Get trace information + available_metadata = get_trace_linking_metadata() + span_id = available_metadata.get("span.id", "") + trace_id = available_metadata.get("trace.id", "") + + settings = transaction.settings if transaction.settings is not None 
else global_settings() + ft_name = callable_name(wrapped) with FunctionTrace(ft_name) as ft: try: response = await wrapped(*args, **kwargs) except Exception as exc: - error_attributes = openai_error_attributes(exc, kwargs) - exc._nr_message = error_attributes.pop("error.message") + notice_error_attributes = { + "http.statusCode": getattr(exc, "http_status", ""), + "error.message": getattr(exc, "_message", ""), + "error.code": getattr(getattr(exc, "error", ""), "code", ""), + "error.param": getattr(exc, "param", ""), + "embedding_id": embedding_id, + } + exc._nr_message = notice_error_attributes.pop("error.message") ft.notice_error( - attributes=error_attributes, + attributes=notice_error_attributes, ) + # Gather attributes to add to embedding summary event in error context + exc_organization = getattr(exc, "organization", "") + error_embedding_dict = { + "id": embedding_id, + "appName": settings.app_name, + "api_key_last_four_digits": api_key_last_four_digits, + "span_id": span_id, + "trace_id": trace_id, + "transaction_id": transaction.guid, + "input": kwargs.get("input", ""), + "request.model": kwargs.get("model") or kwargs.get("engine") or "", + "vendor": "openAI", + "ingest_source": "Python", + "response.organization": "" if exc_organization is None else exc_organization, + "duration": ft.duration, + "error": True, + } + + transaction.record_custom_event("LlmEmbedding", error_embedding_dict) + raise if not response: return response - embedding_id = str(uuid.uuid4()) + response_model = response.get("model", "") + response_usage = response.get("usage", {}) response_headers = getattr(response, "_nr_response_headers", None) + request_id = response_headers.get("x-request-id", "") if response_headers else "" - settings = transaction.settings if transaction.settings is not None else global_settings() - available_metadata = get_trace_linking_metadata() - span_id = available_metadata.get("span.id", "") - trace_id = available_metadata.get("trace.id", "") - - api_key = 
getattr(response, "api_key", None) - usage = response.get("usage") - total_tokens = "" - prompt_tokens = "" - if usage: - total_tokens = usage.get("total_tokens", "") - prompt_tokens = usage.get("prompt_tokens", "") - - embedding_dict = { + full_embedding_response_dict = { "id": embedding_id, - "duration": ft.duration, - "api_key_last_four_digits": f"sk-{api_key[-4:]}" if api_key else "", - "request_id": response_headers.get("x-request-id", ""), - "input": kwargs.get("input", ""), - "response.api_type": getattr(response, "api_type", ""), - "response.organization": getattr(response, "organization", ""), - "request.model": kwargs.get("model") or kwargs.get("engine") or "", - "response.model": response.get("model", ""), "appName": settings.app_name, + "api_key_last_four_digits": api_key_last_four_digits, + "span_id": span_id, "trace_id": trace_id, "transaction_id": transaction.guid, - "span_id": span_id, - "response.usage.total_tokens": total_tokens, - "response.usage.prompt_tokens": prompt_tokens, + "input": kwargs.get("input", ""), + "request.model": kwargs.get("model") or kwargs.get("engine") or "", + "request_id": request_id, + "duration": ft.duration, + "response.model": response_model, + "response.organization": response.organization, + "response.api_type": response.api_type, + "response.usage.total_tokens": response_usage.get("total_tokens", "") if any(response_usage) else "", + "response.usage.prompt_tokens": response_usage.get("prompt_tokens", "") if any(response_usage) else "", "response.headers.llmVersion": response_headers.get("openai-version", ""), "response.headers.ratelimitLimitRequests": check_rate_limit_header( response_headers, "x-ratelimit-limit-requests", True @@ -372,7 +517,7 @@ async def wrap_embedding_acreate(wrapped, instance, args, kwargs): "ingest_source": "Python", } - transaction.record_custom_event("LlmEmbedding", embedding_dict) + transaction.record_custom_event("LlmEmbedding", full_embedding_response_dict) return response @@ -383,69 
+528,118 @@ async def wrap_chat_completion_acreate(wrapped, instance, args, kwargs): if not transaction: return await wrapped(*args, **kwargs) + # Framework metric also used for entity tagging in the UI transaction.add_ml_model_info("OpenAI", OPENAI_VERSION) + request_message_list = kwargs.get("messages", []) + + # Get API key without using the response so we can store it before the response is returned in case of errors + api_key = getattr(openai, "api_key", None) + api_key_last_four_digits = f"sk-{api_key[-4:]}" if api_key else "" + + # Get trace information + available_metadata = get_trace_linking_metadata() + span_id = available_metadata.get("span.id", "") + trace_id = available_metadata.get("trace.id", "") + + # Get conversation ID off of the transaction + custom_attrs_dict = transaction._custom_params + conversation_id = custom_attrs_dict.get("conversation_id", "") + + settings = transaction.settings if transaction.settings is not None else global_settings() + app_name = settings.app_name + completion_id = str(uuid.uuid4()) + ft_name = callable_name(wrapped) with FunctionTrace(ft_name) as ft: try: response = await wrapped(*args, **kwargs) except Exception as exc: - error_attributes = openai_error_attributes(exc, kwargs) - exc._nr_message = error_attributes.pop("error.message") + exc_organization = getattr(exc, "organization", "") + + notice_error_attributes = { + "http.statusCode": getattr(exc, "http_status", ""), + "error.message": getattr(exc, "_message", ""), + "error.code": getattr(getattr(exc, "error", ""), "code", ""), + "error.param": getattr(exc, "param", ""), + "completion_id": completion_id, + } + exc._nr_message = notice_error_attributes.pop("error.message") ft.notice_error( - attributes=error_attributes, + attributes=notice_error_attributes, ) + # Gather attributes to add to chat completion summary event in error context + error_chat_completion_dict = { + "id": completion_id, + "appName": app_name, + "conversation_id": conversation_id, + 
"api_key_last_four_digits": api_key_last_four_digits, + "span_id": span_id, + "trace_id": trace_id, + "transaction_id": transaction.guid, + "response.number_of_messages": len(request_message_list), + "request.model": kwargs.get("model") or kwargs.get("engine") or "", + "request.temperature": kwargs.get("temperature", ""), + "request.max_tokens": kwargs.get("max_tokens", ""), + "vendor": "openAI", + "ingest_source": "Python", + "response.organization": "" if exc_organization is None else exc_organization, + "duration": ft.duration, + "error": True, + } + transaction.record_custom_event("LlmChatCompletionSummary", error_chat_completion_dict) + + create_chat_completion_message_event( + transaction, + app_name, + request_message_list, + completion_id, + span_id, + trace_id, + "", + None, + "", + conversation_id, + None, + ) + raise if not response: return response - conversation_id = transaction._custom_params.get("conversation_id", "") - - chat_completion_id = str(uuid.uuid4()) - available_metadata = get_trace_linking_metadata() - span_id = available_metadata.get("span.id", "") - trace_id = available_metadata.get("trace.id", "") - + # At this point, we have a response so we can grab attributes only available on the response object response_headers = getattr(response, "_nr_response_headers", None) response_model = response.get("model", "") - settings = transaction.settings if transaction.settings is not None else global_settings() response_id = response.get("id") request_id = response_headers.get("x-request-id", "") - api_key = getattr(response, "api_key", None) - usage = response.get("usage") - total_tokens = "" - prompt_tokens = "" - completion_tokens = "" - if usage: - total_tokens = usage.get("total_tokens", "") - prompt_tokens = usage.get("prompt_tokens", "") - completion_tokens = usage.get("completion_tokens", "") + response_usage = response.get("usage", {}) messages = kwargs.get("messages", []) choices = response.get("choices", []) - chat_completion_summary_dict 
= { - "id": chat_completion_id, - "appName": settings.app_name, + full_chat_completion_summary_dict = { + "id": completion_id, + "appName": app_name, "conversation_id": conversation_id, - "request_id": request_id, + "api_key_last_four_digits": api_key_last_four_digits, "span_id": span_id, "trace_id": trace_id, "transaction_id": transaction.guid, - "api_key_last_four_digits": f"sk-{api_key[-4:]}" if api_key else "", - "duration": ft.duration, "request.model": kwargs.get("model") or kwargs.get("engine") or "", - "response.model": response_model, - "response.organization": getattr(response, "organization", ""), - "response.usage.completion_tokens": completion_tokens, - "response.usage.total_tokens": total_tokens, - "response.usage.prompt_tokens": prompt_tokens, - "response.number_of_messages": len(messages) + len(choices), "request.temperature": kwargs.get("temperature", ""), "request.max_tokens": kwargs.get("max_tokens", ""), - "response.choices.finish_reason": choices[0].get("finish_reason", "") if choices else "", + "vendor": "openAI", + "ingest_source": "Python", + "request_id": request_id, + "duration": ft.duration, + "response.model": response_model, + "response.organization": getattr(response, "organization", ""), + "response.usage.completion_tokens": response_usage.get("completion_tokens", "") if any(response_usage) else "", + "response.usage.total_tokens": response_usage.get("total_tokens", "") if any(response_usage) else "", + "response.usage.prompt_tokens": response_usage.get("prompt_tokens", "") if any(response_usage) else "", + "response.choices.finish_reason": choices[0].finish_reason if choices else "", "response.api_type": getattr(response, "api_type", ""), "response.headers.llmVersion": response_headers.get("openai-version", ""), "response.headers.ratelimitLimitRequests": check_rate_limit_header( @@ -466,27 +660,26 @@ async def wrap_chat_completion_acreate(wrapped, instance, args, kwargs): "response.headers.ratelimitRemainingRequests": 
check_rate_limit_header( response_headers, "x-ratelimit-remaining-requests", True ), - "vendor": "openAI", - "ingest_source": "Python", + "response.number_of_messages": len(messages) + len(choices), } - transaction.record_custom_event("LlmChatCompletionSummary", chat_completion_summary_dict) + transaction.record_custom_event("LlmChatCompletionSummary", full_chat_completion_summary_dict) - message_list = list(messages) - if choices: - message_list.extend([choices[0].message]) + input_message_list = list(messages) + output_message_list = [choices[0].message] if choices else None message_ids = create_chat_completion_message_event( transaction, settings.app_name, - message_list, - chat_completion_id, + input_message_list, + completion_id, span_id, trace_id, response_model, response_id, request_id, conversation_id, + output_message_list, ) # Cache message ids on transaction for retrieval after open ai call completion. diff --git a/tests/mlmodel_openai/test_chat_completion.py b/tests/mlmodel_openai/test_chat_completion.py index 5a08649515..2408a6a727 100644 --- a/tests/mlmodel_openai/test_chat_completion.py +++ b/tests/mlmodel_openai/test_chat_completion.py @@ -122,6 +122,7 @@ "sequence": 2, "response.model": "gpt-3.5-turbo-0613", "vendor": "openAI", + "is_response": True, "ingest_source": "Python", }, ), @@ -237,6 +238,7 @@ def test_openai_chat_completion_sync_in_txn_with_convo_id(set_trace_info): "sequence": 2, "response.model": "gpt-3.5-turbo-0613", "vendor": "openAI", + "is_response": True, "ingest_source": "Python", }, ), diff --git a/tests/mlmodel_openai/test_chat_completion_error.py b/tests/mlmodel_openai/test_chat_completion_error.py index c826b0b324..99047d94a8 100644 --- a/tests/mlmodel_openai/test_chat_completion_error.py +++ b/tests/mlmodel_openai/test_chat_completion_error.py @@ -14,13 +14,19 @@ import openai import pytest -from testing_support.fixtures import dt_enabled, reset_core_stats_engine +from testing_support.fixtures import ( + dt_enabled, + 
reset_core_stats_engine, + validate_custom_event_count, +) +from testing_support.validators.validate_custom_events import validate_custom_events from testing_support.validators.validate_error_trace_attributes import ( validate_error_trace_attributes, ) from testing_support.validators.validate_span_events import validate_span_events from newrelic.api.background_task import background_task +from newrelic.api.transaction import add_custom_attribute from newrelic.common.object_names import callable_name _test_openai_chat_completion_messages = ( @@ -28,8 +34,68 @@ {"role": "user", "content": "What is 212 degrees Fahrenheit converted to Celsius?"}, ) - # Sync tests: +expected_events_on_no_model_error = [ + ( + {"type": "LlmChatCompletionSummary"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (mlmodel_openai)", + "transaction_id": "transaction-id", + "conversation_id": "my-awesome-id", + "span_id": "span-id", + "trace_id": "trace-id", + "api_key_last_four_digits": "sk-CRET", + "duration": None, # Response time varies each test run + "request.model": "", # No model in this test case + "response.organization": "", + "request.temperature": 0.7, + "request.max_tokens": 100, + "response.number_of_messages": 2, + "vendor": "openAI", + "ingest_source": "Python", + "error": True, + }, + ), + ( + {"type": "LlmChatCompletionMessage"}, + { + "id": None, + "appName": "Python Agent Test (mlmodel_openai)", + "conversation_id": "my-awesome-id", + "request_id": "", + "span_id": "span-id", + "trace_id": "trace-id", + "transaction_id": "transaction-id", + "content": "You are a scientist.", + "role": "system", + "response.model": "", + "completion_id": None, + "sequence": 0, + "vendor": "openAI", + "ingest_source": "Python", + }, + ), + ( + {"type": "LlmChatCompletionMessage"}, + { + "id": None, + "appName": "Python Agent Test (mlmodel_openai)", + "conversation_id": "my-awesome-id", + "request_id": "", + "span_id": "span-id", + "trace_id": "trace-id", + 
"transaction_id": "transaction-id", + "content": "What is 212 degrees Fahrenheit converted to Celsius?", + "role": "user", + "completion_id": None, + "response.model": "", + "sequence": 1, + "vendor": "openAI", + "ingest_source": "Python", + }, + ), +] # No model provided @@ -41,12 +107,6 @@ "agent": {}, "intrinsic": {}, "user": { - "api_key_last_four_digits": "sk-CRET", - "request.temperature": 0.7, - "request.max_tokens": 100, - "vendor": "openAI", - "ingest_source": "Python", - "response.number_of_messages": 2, "error.param": "engine", }, }, @@ -56,9 +116,13 @@ "error.message": "Must provide an 'engine' or 'model' parameter to create a ", } ) +@validate_custom_events(expected_events_on_no_model_error) +@validate_custom_event_count(count=3) @background_task() -def test_chat_completion_invalid_request_error_no_model(): +def test_chat_completion_invalid_request_error_no_model(set_trace_info): with pytest.raises(openai.InvalidRequestError): + set_trace_info() + add_custom_attribute("conversation_id", "my-awesome-id") openai.ChatCompletion.create( # no model provided, messages=_test_openai_chat_completion_messages, @@ -67,6 +131,50 @@ def test_chat_completion_invalid_request_error_no_model(): ) +expected_events_on_invalid_model_error = [ + ( + {"type": "LlmChatCompletionSummary"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (mlmodel_openai)", + "transaction_id": "transaction-id", + "conversation_id": "my-awesome-id", + "span_id": "span-id", + "trace_id": "trace-id", + "api_key_last_four_digits": "sk-CRET", + "duration": None, # Response time varies each test run + "request.model": "does-not-exist", + "response.organization": "", + "request.temperature": 0.7, + "request.max_tokens": 100, + "response.number_of_messages": 1, + "vendor": "openAI", + "ingest_source": "Python", + "error": True, + }, + ), + ( + {"type": "LlmChatCompletionMessage"}, + { + "id": None, + "appName": "Python Agent Test (mlmodel_openai)", + 
"conversation_id": "my-awesome-id", + "request_id": "", + "span_id": "span-id", + "trace_id": "trace-id", + "transaction_id": "transaction-id", + "content": "Model does not exist.", + "role": "user", + "response.model": "", + "completion_id": None, + "sequence": 0, + "vendor": "openAI", + "ingest_source": "Python", + }, + ), +] + + # Invalid model provided @dt_enabled @reset_core_stats_engine() @@ -76,13 +184,6 @@ def test_chat_completion_invalid_request_error_no_model(): "agent": {}, "intrinsic": {}, "user": { - "api_key_last_four_digits": "sk-CRET", - "request.model": "does-not-exist", - "request.temperature": 0.7, - "request.max_tokens": 100, - "vendor": "openAI", - "ingest_source": "Python", - "response.number_of_messages": 1, "error.code": "model_not_found", "http.statusCode": 404, }, @@ -93,9 +194,13 @@ def test_chat_completion_invalid_request_error_no_model(): "error.message": "The model `does-not-exist` does not exist", } ) +@validate_custom_events(expected_events_on_invalid_model_error) +@validate_custom_event_count(count=2) @background_task() -def test_chat_completion_invalid_request_error_invalid_model(): +def test_chat_completion_invalid_request_error_invalid_model(set_trace_info): with pytest.raises(openai.InvalidRequestError): + set_trace_info() + add_custom_attribute("conversation_id", "my-awesome-id") openai.ChatCompletion.create( model="does-not-exist", messages=({"role": "user", "content": "Model does not exist."},), @@ -104,6 +209,69 @@ def test_chat_completion_invalid_request_error_invalid_model(): ) +expected_events_on_auth_error = [ + ( + {"type": "LlmChatCompletionSummary"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (mlmodel_openai)", + "transaction_id": "transaction-id", + "conversation_id": "my-awesome-id", + "span_id": "span-id", + "trace_id": "trace-id", + "api_key_last_four_digits": "", + "duration": None, # Response time varies each test run + "request.model": "gpt-3.5-turbo", + 
"response.organization": "", + "request.temperature": 0.7, + "request.max_tokens": 100, + "response.number_of_messages": 2, + "vendor": "openAI", + "ingest_source": "Python", + "error": True, + }, + ), + ( + {"type": "LlmChatCompletionMessage"}, + { + "id": None, + "appName": "Python Agent Test (mlmodel_openai)", + "conversation_id": "my-awesome-id", + "request_id": "", + "span_id": "span-id", + "trace_id": "trace-id", + "transaction_id": "transaction-id", + "content": "You are a scientist.", + "role": "system", + "response.model": "", + "completion_id": None, + "sequence": 0, + "vendor": "openAI", + "ingest_source": "Python", + }, + ), + ( + {"type": "LlmChatCompletionMessage"}, + { + "id": None, + "appName": "Python Agent Test (mlmodel_openai)", + "conversation_id": "my-awesome-id", + "request_id": "", + "span_id": "span-id", + "trace_id": "trace-id", + "transaction_id": "transaction-id", + "content": "What is 212 degrees Fahrenheit converted to Celsius?", + "role": "user", + "completion_id": None, + "response.model": "", + "sequence": 1, + "vendor": "openAI", + "ingest_source": "Python", + }, + ), +] + + # No api_key provided @dt_enabled @reset_core_stats_engine() @@ -112,14 +280,7 @@ def test_chat_completion_invalid_request_error_invalid_model(): exact_attrs={ "agent": {}, "intrinsic": {}, - "user": { - "request.model": "gpt-3.5-turbo", - "request.temperature": 0.7, - "request.max_tokens": 100, - "vendor": "openAI", - "ingest_source": "Python", - "response.number_of_messages": 2, - }, + "user": {}, }, ) @validate_span_events( @@ -127,9 +288,13 @@ def test_chat_completion_invalid_request_error_invalid_model(): "error.message": "No API key provided. You can set your API key in code using 'openai.api_key = ', or you can set the environment variable OPENAI_API_KEY=). If your API key is stored in a file, you can point the openai module at it with 'openai.api_key_path = '. You can generate API keys in the OpenAI web interface. 
See https://platform.openai.com/account/api-keys for details.", } ) +@validate_custom_events(expected_events_on_auth_error) +@validate_custom_event_count(count=3) @background_task() -def test_chat_completion_authentication_error(monkeypatch): +def test_chat_completion_authentication_error(monkeypatch, set_trace_info): with pytest.raises(openai.error.AuthenticationError): + set_trace_info() + add_custom_attribute("conversation_id", "my-awesome-id") monkeypatch.setattr(openai, "api_key", None) # openai.api_key = None openai.ChatCompletion.create( model="gpt-3.5-turbo", @@ -139,6 +304,50 @@ def test_chat_completion_authentication_error(monkeypatch): ) +expected_events_on_wrong_api_key_error = [ + ( + {"type": "LlmChatCompletionSummary"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (mlmodel_openai)", + "transaction_id": "transaction-id", + "conversation_id": "", + "span_id": "span-id", + "trace_id": "trace-id", + "api_key_last_four_digits": "sk-BEEF", + "duration": None, # Response time varies each test run + "request.model": "gpt-3.5-turbo", + "response.organization": "", + "request.temperature": 0.7, + "request.max_tokens": 100, + "response.number_of_messages": 1, + "vendor": "openAI", + "ingest_source": "Python", + "error": True, + }, + ), + ( + {"type": "LlmChatCompletionMessage"}, + { + "id": None, + "appName": "Python Agent Test (mlmodel_openai)", + "conversation_id": "", + "request_id": "", + "span_id": "span-id", + "trace_id": "trace-id", + "transaction_id": "transaction-id", + "content": "Invalid API key.", + "role": "user", + "completion_id": None, + "response.model": "", + "sequence": 0, + "vendor": "openAI", + "ingest_source": "Python", + }, + ), +] + + # Wrong api_key provided @dt_enabled @reset_core_stats_engine() @@ -148,13 +357,6 @@ def test_chat_completion_authentication_error(monkeypatch): "agent": {}, "intrinsic": {}, "user": { - "api_key_last_four_digits": "sk-BEEF", - "request.model": "gpt-3.5-turbo", - 
"request.temperature": 0.7, - "request.max_tokens": 100, - "vendor": "openAI", - "ingest_source": "Python", - "response.number_of_messages": 1, "http.statusCode": 401, }, }, @@ -164,9 +366,12 @@ def test_chat_completion_authentication_error(monkeypatch): "error.message": "Incorrect API key provided: invalid. You can find your API key at https://platform.openai.com/account/api-keys.", } ) +@validate_custom_events(expected_events_on_wrong_api_key_error) +@validate_custom_event_count(count=2) @background_task() -def test_chat_completion_wrong_api_key_error(monkeypatch): +def test_chat_completion_wrong_api_key_error(monkeypatch, set_trace_info): with pytest.raises(openai.error.AuthenticationError): + set_trace_info() monkeypatch.setattr(openai, "api_key", "DEADBEEF") # openai.api_key = "DEADBEEF" openai.ChatCompletion.create( model="gpt-3.5-turbo", @@ -177,8 +382,6 @@ def test_chat_completion_wrong_api_key_error(monkeypatch): # Async tests: - - # No model provided @dt_enabled @reset_core_stats_engine() @@ -188,12 +391,6 @@ def test_chat_completion_wrong_api_key_error(monkeypatch): "agent": {}, "intrinsic": {}, "user": { - "api_key_last_four_digits": "sk-CRET", - "request.temperature": 0.7, - "request.max_tokens": 100, - "vendor": "openAI", - "ingest_source": "Python", - "response.number_of_messages": 2, "error.param": "engine", }, }, @@ -203,9 +400,13 @@ def test_chat_completion_wrong_api_key_error(monkeypatch): "error.message": "Must provide an 'engine' or 'model' parameter to create a ", } ) +@validate_custom_events(expected_events_on_no_model_error) +@validate_custom_event_count(count=3) @background_task() -def test_chat_completion_invalid_request_error_no_model_async(loop): +def test_chat_completion_invalid_request_error_no_model_async(loop, set_trace_info): with pytest.raises(openai.InvalidRequestError): + set_trace_info() + add_custom_attribute("conversation_id", "my-awesome-id") loop.run_until_complete( openai.ChatCompletion.acreate( # no model provided, @@ 
-225,13 +426,6 @@ def test_chat_completion_invalid_request_error_no_model_async(loop): "agent": {}, "intrinsic": {}, "user": { - "api_key_last_four_digits": "sk-CRET", - "request.model": "does-not-exist", - "request.temperature": 0.7, - "request.max_tokens": 100, - "vendor": "openAI", - "ingest_source": "Python", - "response.number_of_messages": 1, "error.code": "model_not_found", "http.statusCode": 404, }, @@ -242,9 +436,13 @@ def test_chat_completion_invalid_request_error_no_model_async(loop): "error.message": "The model `does-not-exist` does not exist", } ) +@validate_custom_events(expected_events_on_invalid_model_error) +@validate_custom_event_count(count=2) @background_task() -def test_chat_completion_invalid_request_error_invalid_model_async(loop): +def test_chat_completion_invalid_request_error_invalid_model_async(loop, set_trace_info): with pytest.raises(openai.InvalidRequestError): + set_trace_info() + add_custom_attribute("conversation_id", "my-awesome-id") loop.run_until_complete( openai.ChatCompletion.acreate( model="does-not-exist", @@ -263,14 +461,7 @@ def test_chat_completion_invalid_request_error_invalid_model_async(loop): exact_attrs={ "agent": {}, "intrinsic": {}, - "user": { - "request.model": "gpt-3.5-turbo", - "request.temperature": 0.7, - "request.max_tokens": 100, - "vendor": "openAI", - "ingest_source": "Python", - "response.number_of_messages": 2, - }, + "user": {}, }, ) @validate_span_events( @@ -278,9 +469,13 @@ def test_chat_completion_invalid_request_error_invalid_model_async(loop): "error.message": "No API key provided. You can set your API key in code using 'openai.api_key = ', or you can set the environment variable OPENAI_API_KEY=). If your API key is stored in a file, you can point the openai module at it with 'openai.api_key_path = '. You can generate API keys in the OpenAI web interface. 
See https://platform.openai.com/account/api-keys for details.", } ) +@validate_custom_events(expected_events_on_auth_error) +@validate_custom_event_count(count=3) @background_task() -def test_chat_completion_authentication_error_async(loop, monkeypatch): +def test_chat_completion_authentication_error_async(loop, monkeypatch, set_trace_info): with pytest.raises(openai.error.AuthenticationError): + set_trace_info() + add_custom_attribute("conversation_id", "my-awesome-id") monkeypatch.setattr(openai, "api_key", None) # openai.api_key = None loop.run_until_complete( openai.ChatCompletion.acreate( @@ -298,13 +493,6 @@ def test_chat_completion_authentication_error_async(loop, monkeypatch): "agent": {}, "intrinsic": {}, "user": { - "api_key_last_four_digits": "sk-BEEF", - "request.model": "gpt-3.5-turbo", - "request.temperature": 0.7, - "request.max_tokens": 100, - "vendor": "openAI", - "ingest_source": "Python", - "response.number_of_messages": 1, "http.statusCode": 401, }, }, @@ -314,9 +502,12 @@ def test_chat_completion_authentication_error_async(loop, monkeypatch): "error.message": "Incorrect API key provided: invalid. 
You can find your API key at https://platform.openai.com/account/api-keys.", } ) +@validate_custom_events(expected_events_on_wrong_api_key_error) +@validate_custom_event_count(count=2) @background_task() -def test_chat_completion_wrong_api_key_error_async(loop, monkeypatch): +def test_chat_completion_wrong_api_key_error_async(loop, monkeypatch, set_trace_info): with pytest.raises(openai.error.AuthenticationError): + set_trace_info() monkeypatch.setattr(openai, "api_key", "DEADBEEF") # openai.api_key = "DEADBEEF" loop.run_until_complete( openai.ChatCompletion.acreate( diff --git a/tests/mlmodel_openai/test_embeddings_error.py b/tests/mlmodel_openai/test_embeddings_error.py index 35d189ff50..3dc6b4cbec 100644 --- a/tests/mlmodel_openai/test_embeddings_error.py +++ b/tests/mlmodel_openai/test_embeddings_error.py @@ -14,7 +14,12 @@ import openai import pytest -from testing_support.fixtures import dt_enabled, reset_core_stats_engine +from testing_support.fixtures import ( + dt_enabled, + reset_core_stats_engine, + validate_custom_event_count, +) +from testing_support.validators.validate_custom_events import validate_custom_events from testing_support.validators.validate_error_trace_attributes import ( validate_error_trace_attributes, ) @@ -24,6 +29,26 @@ from newrelic.common.object_names import callable_name # Sync tests: +embedding_recorded_events = [ + ( + {"type": "LlmEmbedding"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (mlmodel_openai)", + "transaction_id": "transaction-id", + "span_id": "span-id", + "trace_id": "trace-id", + "input": "This is an embedding test with no model.", + "api_key_last_four_digits": "sk-CRET", + "duration": None, # Response time varies each test run + "request.model": "", # No model in this test case + "response.organization": "", + "vendor": "openAI", + "ingest_source": "Python", + "error": True, + }, + ), +] # No model provided @@ -35,9 +60,6 @@ "agent": {}, "intrinsic": {}, "user": { - 
"api_key_last_four_digits": "sk-CRET", - "vendor": "openAI", - "ingest_source": "Python", "error.param": "engine", }, }, @@ -47,15 +69,40 @@ "error.message": "Must provide an 'engine' or 'model' parameter to create a ", } ) +@validate_custom_events(embedding_recorded_events) +@validate_custom_event_count(count=1) @background_task() -def test_embeddings_invalid_request_error_no_model(): +def test_embeddings_invalid_request_error_no_model(set_trace_info): with pytest.raises(openai.InvalidRequestError): + set_trace_info() openai.Embedding.create( input="This is an embedding test with no model.", # no model provided ) +invalid_model_events = [ + ( + {"type": "LlmEmbedding"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (mlmodel_openai)", + "transaction_id": "transaction-id", + "span_id": "span-id", + "trace_id": "trace-id", + "input": "Model does not exist.", + "api_key_last_four_digits": "sk-CRET", + "duration": None, # Response time varies each test run + "request.model": "does-not-exist", # No model in this test case + "response.organization": None, + "vendor": "openAI", + "ingest_source": "Python", + "error": True, + }, + ), +] + + # Invalid model provided @dt_enabled @reset_core_stats_engine() @@ -65,11 +112,6 @@ def test_embeddings_invalid_request_error_no_model(): "agent": {}, "intrinsic": {}, "user": { - "api_key_last_four_digits": "sk-CRET", - "request.model": "does-not-exist", - "vendor": "openAI", - "ingest_source": "Python", - "error.code": "model_not_found", "http.statusCode": 404, }, }, @@ -80,12 +122,37 @@ def test_embeddings_invalid_request_error_no_model(): # "http.statusCode": 404, } ) +@validate_custom_events(invalid_model_events) +@validate_custom_event_count(count=1) @background_task() -def test_embeddings_invalid_request_error_invalid_model(): +def test_embeddings_invalid_request_error_invalid_model(set_trace_info): + set_trace_info() with pytest.raises(openai.InvalidRequestError): 
openai.Embedding.create(input="Model does not exist.", model="does-not-exist") +embedding_auth_error_events = [ + ( + {"type": "LlmEmbedding"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (mlmodel_openai)", + "transaction_id": "transaction-id", + "span_id": "span-id", + "trace_id": "trace-id", + "input": "Invalid API key.", + "api_key_last_four_digits": "", + "duration": None, # Response time varies each test run + "request.model": "text-embedding-ada-002", # No model in this test case + "response.organization": None, + "vendor": "openAI", + "ingest_source": "Python", + "error": True, + }, + ), +] + + # No api_key provided @dt_enabled @reset_core_stats_engine() @@ -94,11 +161,7 @@ def test_embeddings_invalid_request_error_invalid_model(): exact_attrs={ "agent": {}, "intrinsic": {}, - "user": { - "request.model": "text-embedding-ada-002", - "vendor": "openAI", - "ingest_source": "Python", - }, + "user": {}, }, ) @validate_span_events( @@ -106,13 +169,38 @@ def test_embeddings_invalid_request_error_invalid_model(): "error.message": "No API key provided. You can set your API key in code using 'openai.api_key = ', or you can set the environment variable OPENAI_API_KEY=). If your API key is stored in a file, you can point the openai module at it with 'openai.api_key_path = '. You can generate API keys in the OpenAI web interface. 
See https://platform.openai.com/account/api-keys for details.", } ) +@validate_custom_events(embedding_auth_error_events) +@validate_custom_event_count(count=1) @background_task() -def test_embeddings_authentication_error(monkeypatch): +def test_embeddings_authentication_error(monkeypatch, set_trace_info): with pytest.raises(openai.error.AuthenticationError): + set_trace_info() monkeypatch.setattr(openai, "api_key", None) # openai.api_key = None openai.Embedding.create(input="Invalid API key.", model="text-embedding-ada-002") +embedding_invalid_key_error_events = [ + ( + {"type": "LlmEmbedding"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (mlmodel_openai)", + "transaction_id": "transaction-id", + "span_id": "span-id", + "trace_id": "trace-id", + "input": "Embedded: Invalid API key.", + "api_key_last_four_digits": "sk-BEEF", + "duration": None, # Response time varies each test run + "request.model": "text-embedding-ada-002", # No model in this test case + "response.organization": None, + "vendor": "openAI", + "ingest_source": "Python", + "error": True, + }, + ), +] + + # Wrong api_key provided @dt_enabled @reset_core_stats_engine() @@ -122,10 +210,6 @@ def test_embeddings_authentication_error(monkeypatch): "agent": {}, "intrinsic": {}, "user": { - "api_key_last_four_digits": "sk-BEEF", - "request.model": "text-embedding-ada-002", - "vendor": "openAI", - "ingest_source": "Python", "http.statusCode": 401, }, }, @@ -135,9 +219,12 @@ def test_embeddings_authentication_error(monkeypatch): "error.message": "Incorrect API key provided: DEADBEEF. 
You can find your API key at https://platform.openai.com/account/api-keys.", } ) +@validate_custom_events(embedding_invalid_key_error_events) +@validate_custom_event_count(count=1) @background_task() -def test_embeddings_wrong_api_key_error(monkeypatch): +def test_embeddings_wrong_api_key_error(monkeypatch, set_trace_info): with pytest.raises(openai.error.AuthenticationError): + set_trace_info() monkeypatch.setattr(openai, "api_key", "DEADBEEF") # openai.api_key = "DEADBEEF" openai.Embedding.create(input="Embedded: Invalid API key.", model="text-embedding-ada-002") @@ -154,9 +241,6 @@ def test_embeddings_wrong_api_key_error(monkeypatch): "agent": {}, "intrinsic": {}, "user": { - "api_key_last_four_digits": "sk-CRET", - "vendor": "openAI", - "ingest_source": "Python", "error.param": "engine", }, }, @@ -166,9 +250,12 @@ def test_embeddings_wrong_api_key_error(monkeypatch): "error.message": "Must provide an 'engine' or 'model' parameter to create a ", } ) +@validate_custom_events(embedding_recorded_events) +@validate_custom_event_count(count=1) @background_task() -def test_embeddings_invalid_request_error_no_model_async(loop): +def test_embeddings_invalid_request_error_no_model_async(loop, set_trace_info): with pytest.raises(openai.InvalidRequestError): + set_trace_info() loop.run_until_complete( openai.Embedding.acreate( input="This is an embedding test with no model.", @@ -186,11 +273,6 @@ def test_embeddings_invalid_request_error_no_model_async(loop): "agent": {}, "intrinsic": {}, "user": { - "api_key_last_four_digits": "sk-CRET", - "request.model": "does-not-exist", - "vendor": "openAI", - "ingest_source": "Python", - "error.code": "model_not_found", "http.statusCode": 404, }, }, @@ -200,9 +282,12 @@ def test_embeddings_invalid_request_error_no_model_async(loop): "error.message": "The model `does-not-exist` does not exist", } ) +@validate_custom_events(invalid_model_events) +@validate_custom_event_count(count=1) @background_task() -def 
test_embeddings_invalid_request_error_invalid_model_async(loop): +def test_embeddings_invalid_request_error_invalid_model_async(loop, set_trace_info): with pytest.raises(openai.InvalidRequestError): + set_trace_info() loop.run_until_complete(openai.Embedding.acreate(input="Model does not exist.", model="does-not-exist")) @@ -214,11 +299,7 @@ def test_embeddings_invalid_request_error_invalid_model_async(loop): exact_attrs={ "agent": {}, "intrinsic": {}, - "user": { - "request.model": "text-embedding-ada-002", - "vendor": "openAI", - "ingest_source": "Python", - }, + "user": {}, }, ) @validate_span_events( @@ -226,9 +307,12 @@ def test_embeddings_invalid_request_error_invalid_model_async(loop): "error.message": "No API key provided. You can set your API key in code using 'openai.api_key = ', or you can set the environment variable OPENAI_API_KEY=). If your API key is stored in a file, you can point the openai module at it with 'openai.api_key_path = '. You can generate API keys in the OpenAI web interface. 
See https://platform.openai.com/account/api-keys for details.", } ) +@validate_custom_events(embedding_auth_error_events) +@validate_custom_event_count(count=1) @background_task() -def test_embeddings_authentication_error_async(loop, monkeypatch): +def test_embeddings_authentication_error_async(loop, monkeypatch, set_trace_info): with pytest.raises(openai.error.AuthenticationError): + set_trace_info() monkeypatch.setattr(openai, "api_key", None) # openai.api_key = None loop.run_until_complete(openai.Embedding.acreate(input="Invalid API key.", model="text-embedding-ada-002")) @@ -242,10 +326,6 @@ def test_embeddings_authentication_error_async(loop, monkeypatch): "agent": {}, "intrinsic": {}, "user": { - "api_key_last_four_digits": "sk-BEEF", - "request.model": "text-embedding-ada-002", - "vendor": "openAI", - "ingest_source": "Python", "http.statusCode": 401, }, }, @@ -255,9 +335,12 @@ def test_embeddings_authentication_error_async(loop, monkeypatch): "error.message": "Incorrect API key provided: DEADBEEF. 
You can find your API key at https://platform.openai.com/account/api-keys.", } ) +@validate_custom_events(embedding_invalid_key_error_events) +@validate_custom_event_count(count=1) @background_task() -def test_embeddings_wrong_api_key_error_async(loop, monkeypatch): +def test_embeddings_wrong_api_key_error_async(loop, monkeypatch, set_trace_info): with pytest.raises(openai.error.AuthenticationError): + set_trace_info() monkeypatch.setattr(openai, "api_key", "DEADBEEF") # openai.api_key = "DEADBEEF" loop.run_until_complete( openai.Embedding.acreate(input="Embedded: Invalid API key.", model="text-embedding-ada-002") From b91525500377a5ba6d6958df85bb1c4f169894a7 Mon Sep 17 00:00:00 2001 From: Hannah Stepanek Date: Mon, 4 Dec 2023 16:36:10 -0800 Subject: [PATCH 008/199] Use named function traces (#992) * Use named function traces * Use named function traces in bedrock * Fixup: too many blank lines --- newrelic/hooks/external_botocore.py | 7 +- newrelic/hooks/mlmodel_openai.py | 21 +++--- .../test_bedrock_chat_completion.py | 26 ++++++- .../test_bedrock_embeddings.py | 14 +++- tests/mlmodel_openai/test_chat_completion.py | 18 +++++ .../test_chat_completion_error.py | 51 +++++++++++++ tests/mlmodel_openai/test_embeddings.py | 8 ++ tests/mlmodel_openai/test_embeddings_error.py | 75 +++++++++++++++++++ 8 files changed, 203 insertions(+), 17 deletions(-) diff --git a/newrelic/hooks/external_botocore.py b/newrelic/hooks/external_botocore.py index 72083b2abd..3a463284c4 100644 --- a/newrelic/hooks/external_botocore.py +++ b/newrelic/hooks/external_botocore.py @@ -25,7 +25,6 @@ from newrelic.api.message_trace import message_trace from newrelic.api.time_trace import get_trace_linking_metadata from newrelic.api.transaction import current_transaction -from newrelic.common.object_names import callable_name from newrelic.common.object_wrapper import function_wrapper, wrap_function_wrapper from newrelic.common.package_version_utils import get_package_version from 
newrelic.core.config import global_settings @@ -312,8 +311,10 @@ def wrap_bedrock_runtime_invoke_model(wrapped, instance, args, kwargs): extractor = lambda *args: ([], {}) # Empty extractor that returns nothing - ft_name = callable_name(wrapped) - with FunctionTrace(ft_name) as ft: + function_name = wrapped.__name__ + operation = "embedding" if model.startswith("amazon.titan-embed") else "completion" + + with FunctionTrace(name=function_name, group="Llm/%s/Bedrock" % (operation)) as ft: try: response = wrapped(*args, **kwargs) except Exception as exc: diff --git a/newrelic/hooks/mlmodel_openai.py b/newrelic/hooks/mlmodel_openai.py index 1cac395928..bd80f6aac5 100644 --- a/newrelic/hooks/mlmodel_openai.py +++ b/newrelic/hooks/mlmodel_openai.py @@ -19,7 +19,6 @@ from newrelic.api.function_trace import FunctionTrace from newrelic.api.time_trace import get_trace_linking_metadata from newrelic.api.transaction import current_transaction -from newrelic.common.object_names import callable_name from newrelic.common.object_wrapper import wrap_function_wrapper from newrelic.common.package_version_utils import get_package_version from newrelic.core.config import global_settings @@ -49,8 +48,9 @@ def wrap_embedding_create(wrapped, instance, args, kwargs): settings = transaction.settings if transaction.settings is not None else global_settings() - ft_name = callable_name(wrapped) - with FunctionTrace(ft_name) as ft: + function_name = wrapped.__name__ + + with FunctionTrace(name=function_name, group="Llm/embedding/OpenAI") as ft: try: response = wrapped(*args, **kwargs) except Exception as exc: @@ -167,8 +167,9 @@ def wrap_chat_completion_create(wrapped, instance, args, kwargs): app_name = settings.app_name completion_id = str(uuid.uuid4()) - ft_name = callable_name(wrapped) - with FunctionTrace(ft_name) as ft: + function_name = wrapped.__name__ + + with FunctionTrace(name=function_name, group="Llm/completion/OpenAI") as ft: try: response = wrapped(*args, **kwargs) except 
Exception as exc: @@ -432,8 +433,9 @@ async def wrap_embedding_acreate(wrapped, instance, args, kwargs): settings = transaction.settings if transaction.settings is not None else global_settings() - ft_name = callable_name(wrapped) - with FunctionTrace(ft_name) as ft: + function_name = wrapped.__name__ + + with FunctionTrace(name=function_name, group="Llm/embedding/OpenAI") as ft: try: response = await wrapped(*args, **kwargs) except Exception as exc: @@ -550,8 +552,9 @@ async def wrap_chat_completion_acreate(wrapped, instance, args, kwargs): app_name = settings.app_name completion_id = str(uuid.uuid4()) - ft_name = callable_name(wrapped) - with FunctionTrace(ft_name) as ft: + function_name = wrapped.__name__ + + with FunctionTrace(name=function_name, group="Llm/completion/OpenAI") as ft: try: response = await wrapped(*args, **kwargs) except Exception as exc: diff --git a/tests/external_botocore/test_bedrock_chat_completion.py b/tests/external_botocore/test_bedrock_chat_completion.py index 29489191f7..e8cb2d985e 100644 --- a/tests/external_botocore/test_bedrock_chat_completion.py +++ b/tests/external_botocore/test_bedrock_chat_completion.py @@ -30,10 +30,10 @@ reset_core_stats_engine, validate_custom_event_count, ) +from testing_support.validators.validate_custom_events import validate_custom_events from testing_support.validators.validate_error_trace_attributes import ( validate_error_trace_attributes, ) -from testing_support.validators.validate_custom_events import validate_custom_events from testing_support.validators.validate_transaction_metrics import ( validate_transaction_metrics, ) @@ -111,6 +111,8 @@ def test_bedrock_chat_completion_in_txn_with_convo_id(set_trace_info, exercise_m @validate_custom_event_count(count=3) @validate_transaction_metrics( name="test_bedrock_chat_completion_in_txn_with_convo_id", + scoped_metrics=[("Llm/completion/Bedrock/invoke_model", 1)], + rollup_metrics=[("Llm/completion/Bedrock/invoke_model", 1)], custom_metrics=[ 
("Python/ML/Bedrock/%s" % BOTOCORE_VERSION, 1), ], @@ -133,6 +135,8 @@ def test_bedrock_chat_completion_in_txn_no_convo_id(set_trace_info, exercise_mod @validate_custom_event_count(count=3) @validate_transaction_metrics( name="test_bedrock_chat_completion_in_txn_no_convo_id", + scoped_metrics=[("Llm/completion/Bedrock/invoke_model", 1)], + rollup_metrics=[("Llm/completion/Bedrock/invoke_model", 1)], custom_metrics=[ ("Python/ML/Bedrock/%s" % BOTOCORE_VERSION, 1), ], @@ -194,6 +198,15 @@ def test_bedrock_chat_completion_disabled_settings(set_trace_info, exercise_mode }, }, ) +@validate_transaction_metrics( + name="test_bedrock_chat_completion:test_bedrock_chat_completion_error_invalid_model", + scoped_metrics=[("Llm/completion/Bedrock/invoke_model", 1)], + rollup_metrics=[("Llm/completion/Bedrock/invoke_model", 1)], + custom_metrics=[ + ("Python/ML/Bedrock/%s" % BOTOCORE_VERSION, 1), + ], + background_task=True, +) @background_task() def test_bedrock_chat_completion_error_invalid_model(bedrock_server, set_trace_info): set_trace_info() @@ -220,7 +233,16 @@ def test_bedrock_chat_completion_error_incorrect_access_key( "user": expected_client_error, }, ) - @background_task() + @validate_transaction_metrics( + name="test_bedrock_chat_completion", + scoped_metrics=[("Llm/completion/Bedrock/invoke_model", 1)], + rollup_metrics=[("Llm/completion/Bedrock/invoke_model", 1)], + custom_metrics=[ + ("Python/ML/Bedrock/%s" % BOTOCORE_VERSION, 1), + ], + background_task=True, + ) + @background_task(name="test_bedrock_chat_completion") def _test(): monkeypatch.setattr(bedrock_server._request_signer._credentials, "access_key", "INVALID-ACCESS-KEY") diff --git a/tests/external_botocore/test_bedrock_embeddings.py b/tests/external_botocore/test_bedrock_embeddings.py index 788e4ec867..d2353d94eb 100644 --- a/tests/external_botocore/test_bedrock_embeddings.py +++ b/tests/external_botocore/test_bedrock_embeddings.py @@ -27,12 +27,12 @@ dt_enabled, override_application_settings, 
reset_core_stats_engine, - validate_custom_event_count + validate_custom_event_count, ) +from testing_support.validators.validate_custom_events import validate_custom_events from testing_support.validators.validate_error_trace_attributes import ( validate_error_trace_attributes, ) -from testing_support.validators.validate_custom_events import validate_custom_events from testing_support.validators.validate_transaction_metrics import ( validate_transaction_metrics, ) @@ -96,6 +96,8 @@ def test_bedrock_embedding(set_trace_info, exercise_model, expected_events): @validate_custom_event_count(count=1) @validate_transaction_metrics( name="test_bedrock_embedding", + scoped_metrics=[("Llm/embedding/Bedrock/invoke_model", 1)], + rollup_metrics=[("Llm/embedding/Bedrock/invoke_model", 1)], custom_metrics=[ ("Python/ML/Bedrock/%s" % BOTOCORE_VERSION, 1), ], @@ -148,7 +150,13 @@ def test_bedrock_embedding_error_incorrect_access_key( "user": expected_client_error, }, ) - @background_task() + @validate_transaction_metrics( + name="test_bedrock_embedding", + scoped_metrics=[("Llm/embedding/Bedrock/invoke_model", 1)], + rollup_metrics=[("Llm/embedding/Bedrock/invoke_model", 1)], + background_task=True, + ) + @background_task(name="test_bedrock_embedding") def _test(): monkeypatch.setattr(bedrock_server._request_signer._credentials, "access_key", "INVALID-ACCESS-KEY") diff --git a/tests/mlmodel_openai/test_chat_completion.py b/tests/mlmodel_openai/test_chat_completion.py index 2408a6a727..ec871b9476 100644 --- a/tests/mlmodel_openai/test_chat_completion.py +++ b/tests/mlmodel_openai/test_chat_completion.py @@ -249,6 +249,12 @@ def test_openai_chat_completion_sync_in_txn_with_convo_id(set_trace_info): @validate_custom_events(chat_completion_recorded_events_no_convo_id) # One summary event, one system message, one user message, and one response message from the assistant @validate_custom_event_count(count=4) +@validate_transaction_metrics( + 
"test_chat_completion:test_openai_chat_completion_sync_in_txn_no_convo_id", + scoped_metrics=[("Llm/completion/OpenAI/create", 1)], + rollup_metrics=[("Llm/completion/OpenAI/create", 1)], + background_task=True, +) @background_task() def test_openai_chat_completion_sync_in_txn_no_convo_id(set_trace_info): set_trace_info() @@ -287,6 +293,12 @@ def test_openai_chat_completion_sync_custom_events_insights_disabled(set_trace_i @reset_core_stats_engine() @validate_custom_events(chat_completion_recorded_events_no_convo_id) @validate_custom_event_count(count=4) +@validate_transaction_metrics( + "test_chat_completion:test_openai_chat_completion_async_conversation_id_unset", + scoped_metrics=[("Llm/completion/OpenAI/acreate", 1)], + rollup_metrics=[("Llm/completion/OpenAI/acreate", 1)], + background_task=True, +) @background_task() def test_openai_chat_completion_async_conversation_id_unset(loop, set_trace_info): set_trace_info() @@ -301,6 +313,12 @@ def test_openai_chat_completion_async_conversation_id_unset(loop, set_trace_info @reset_core_stats_engine() @validate_custom_events(chat_completion_recorded_events) @validate_custom_event_count(count=4) +@validate_transaction_metrics( + "test_chat_completion:test_openai_chat_completion_async_conversation_id_set", + scoped_metrics=[("Llm/completion/OpenAI/acreate", 1)], + rollup_metrics=[("Llm/completion/OpenAI/acreate", 1)], + background_task=True, +) @validate_transaction_metrics( name="test_chat_completion:test_openai_chat_completion_async_conversation_id_set", custom_metrics=[ diff --git a/tests/mlmodel_openai/test_chat_completion_error.py b/tests/mlmodel_openai/test_chat_completion_error.py index 99047d94a8..812e7166e3 100644 --- a/tests/mlmodel_openai/test_chat_completion_error.py +++ b/tests/mlmodel_openai/test_chat_completion_error.py @@ -24,6 +24,9 @@ validate_error_trace_attributes, ) from testing_support.validators.validate_span_events import validate_span_events +from 
testing_support.validators.validate_transaction_metrics import ( + validate_transaction_metrics, +) from newrelic.api.background_task import background_task from newrelic.api.transaction import add_custom_attribute @@ -116,6 +119,12 @@ "error.message": "Must provide an 'engine' or 'model' parameter to create a ", } ) +@validate_transaction_metrics( + "test_chat_completion_error:test_chat_completion_invalid_request_error_no_model", + scoped_metrics=[("Llm/completion/OpenAI/create", 1)], + rollup_metrics=[("Llm/completion/OpenAI/create", 1)], + background_task=True, +) @validate_custom_events(expected_events_on_no_model_error) @validate_custom_event_count(count=3) @background_task() @@ -194,6 +203,12 @@ def test_chat_completion_invalid_request_error_no_model(set_trace_info): "error.message": "The model `does-not-exist` does not exist", } ) +@validate_transaction_metrics( + "test_chat_completion_error:test_chat_completion_invalid_request_error_invalid_model", + scoped_metrics=[("Llm/completion/OpenAI/create", 1)], + rollup_metrics=[("Llm/completion/OpenAI/create", 1)], + background_task=True, +) @validate_custom_events(expected_events_on_invalid_model_error) @validate_custom_event_count(count=2) @background_task() @@ -288,6 +303,12 @@ def test_chat_completion_invalid_request_error_invalid_model(set_trace_info): "error.message": "No API key provided. You can set your API key in code using 'openai.api_key = ', or you can set the environment variable OPENAI_API_KEY=). If your API key is stored in a file, you can point the openai module at it with 'openai.api_key_path = '. You can generate API keys in the OpenAI web interface. 
See https://platform.openai.com/account/api-keys for details.", } ) +@validate_transaction_metrics( + "test_chat_completion_error:test_chat_completion_authentication_error", + scoped_metrics=[("Llm/completion/OpenAI/create", 1)], + rollup_metrics=[("Llm/completion/OpenAI/create", 1)], + background_task=True, +) @validate_custom_events(expected_events_on_auth_error) @validate_custom_event_count(count=3) @background_task() @@ -366,6 +387,12 @@ def test_chat_completion_authentication_error(monkeypatch, set_trace_info): "error.message": "Incorrect API key provided: invalid. You can find your API key at https://platform.openai.com/account/api-keys.", } ) +@validate_transaction_metrics( + "test_chat_completion_error:test_chat_completion_wrong_api_key_error", + scoped_metrics=[("Llm/completion/OpenAI/create", 1)], + rollup_metrics=[("Llm/completion/OpenAI/create", 1)], + background_task=True, +) @validate_custom_events(expected_events_on_wrong_api_key_error) @validate_custom_event_count(count=2) @background_task() @@ -400,6 +427,12 @@ def test_chat_completion_wrong_api_key_error(monkeypatch, set_trace_info): "error.message": "Must provide an 'engine' or 'model' parameter to create a ", } ) +@validate_transaction_metrics( + "test_chat_completion_error:test_chat_completion_invalid_request_error_no_model_async", + scoped_metrics=[("Llm/completion/OpenAI/acreate", 1)], + rollup_metrics=[("Llm/completion/OpenAI/acreate", 1)], + background_task=True, +) @validate_custom_events(expected_events_on_no_model_error) @validate_custom_event_count(count=3) @background_task() @@ -436,6 +469,12 @@ def test_chat_completion_invalid_request_error_no_model_async(loop, set_trace_in "error.message": "The model `does-not-exist` does not exist", } ) +@validate_transaction_metrics( + "test_chat_completion_error:test_chat_completion_invalid_request_error_invalid_model_async", + scoped_metrics=[("Llm/completion/OpenAI/acreate", 1)], + rollup_metrics=[("Llm/completion/OpenAI/acreate", 1)], + 
background_task=True, +) @validate_custom_events(expected_events_on_invalid_model_error) @validate_custom_event_count(count=2) @background_task() @@ -469,6 +508,12 @@ def test_chat_completion_invalid_request_error_invalid_model_async(loop, set_tra "error.message": "No API key provided. You can set your API key in code using 'openai.api_key = ', or you can set the environment variable OPENAI_API_KEY=). If your API key is stored in a file, you can point the openai module at it with 'openai.api_key_path = '. You can generate API keys in the OpenAI web interface. See https://platform.openai.com/account/api-keys for details.", } ) +@validate_transaction_metrics( + "test_chat_completion_error:test_chat_completion_authentication_error_async", + scoped_metrics=[("Llm/completion/OpenAI/acreate", 1)], + rollup_metrics=[("Llm/completion/OpenAI/acreate", 1)], + background_task=True, +) @validate_custom_events(expected_events_on_auth_error) @validate_custom_event_count(count=3) @background_task() @@ -502,6 +547,12 @@ def test_chat_completion_authentication_error_async(loop, monkeypatch, set_trace "error.message": "Incorrect API key provided: invalid. 
You can find your API key at https://platform.openai.com/account/api-keys.", } ) +@validate_transaction_metrics( + "test_chat_completion_error:test_chat_completion_wrong_api_key_error_async", + scoped_metrics=[("Llm/completion/OpenAI/acreate", 1)], + rollup_metrics=[("Llm/completion/OpenAI/acreate", 1)], + background_task=True, +) @validate_custom_events(expected_events_on_wrong_api_key_error) @validate_custom_event_count(count=2) @background_task() diff --git a/tests/mlmodel_openai/test_embeddings.py b/tests/mlmodel_openai/test_embeddings.py index 23e09b18af..24ce067ce9 100644 --- a/tests/mlmodel_openai/test_embeddings.py +++ b/tests/mlmodel_openai/test_embeddings.py @@ -65,6 +65,8 @@ @validate_custom_event_count(count=1) @validate_transaction_metrics( name="test_embeddings:test_openai_embedding_sync", + scoped_metrics=[("Llm/embedding/OpenAI/create", 1)], + rollup_metrics=[("Llm/embedding/OpenAI/create", 1)], custom_metrics=[ ("Python/ML/OpenAI/%s" % openai.__version__, 1), ], @@ -87,6 +89,8 @@ def test_openai_embedding_sync_outside_txn(): @validate_custom_event_count(count=0) @validate_transaction_metrics( name="test_embeddings:test_openai_embedding_sync_disabled_settings", + scoped_metrics=[("Llm/embedding/OpenAI/create", 1)], + rollup_metrics=[("Llm/embedding/OpenAI/create", 1)], custom_metrics=[ ("Python/ML/OpenAI/%s" % openai.__version__, 1), ], @@ -103,6 +107,8 @@ def test_openai_embedding_sync_disabled_settings(set_trace_info): @validate_custom_event_count(count=1) @validate_transaction_metrics( name="test_embeddings:test_openai_embedding_async", + scoped_metrics=[("Llm/embedding/OpenAI/acreate", 1)], + rollup_metrics=[("Llm/embedding/OpenAI/acreate", 1)], custom_metrics=[ ("Python/ML/OpenAI/%s" % openai.__version__, 1), ], @@ -130,6 +136,8 @@ def test_openai_embedding_async_outside_transaction(loop): @validate_custom_event_count(count=0) @validate_transaction_metrics( name="test_embeddings:test_openai_embedding_async_disabled_custom_insights_events", + 
scoped_metrics=[("Llm/embedding/OpenAI/acreate", 1)], + rollup_metrics=[("Llm/embedding/OpenAI/acreate", 1)], custom_metrics=[ ("Python/ML/OpenAI/%s" % openai.__version__, 1), ], diff --git a/tests/mlmodel_openai/test_embeddings_error.py b/tests/mlmodel_openai/test_embeddings_error.py index 3dc6b4cbec..97e53e048c 100644 --- a/tests/mlmodel_openai/test_embeddings_error.py +++ b/tests/mlmodel_openai/test_embeddings_error.py @@ -24,6 +24,9 @@ validate_error_trace_attributes, ) from testing_support.validators.validate_span_events import validate_span_events +from testing_support.validators.validate_transaction_metrics import ( + validate_transaction_metrics, +) from newrelic.api.background_task import background_task from newrelic.common.object_names import callable_name @@ -69,6 +72,15 @@ "error.message": "Must provide an 'engine' or 'model' parameter to create a ", } ) +@validate_transaction_metrics( + name="test_embeddings_error:test_embeddings_invalid_request_error_no_model", + scoped_metrics=[("Llm/embedding/OpenAI/create", 1)], + rollup_metrics=[("Llm/embedding/OpenAI/create", 1)], + custom_metrics=[ + ("Python/ML/OpenAI/%s" % openai.__version__, 1), + ], + background_task=True, +) @validate_custom_events(embedding_recorded_events) @validate_custom_event_count(count=1) @background_task() @@ -122,6 +134,15 @@ def test_embeddings_invalid_request_error_no_model(set_trace_info): # "http.statusCode": 404, } ) +@validate_transaction_metrics( + name="test_embeddings_error:test_embeddings_invalid_request_error_invalid_model", + scoped_metrics=[("Llm/embedding/OpenAI/create", 1)], + rollup_metrics=[("Llm/embedding/OpenAI/create", 1)], + custom_metrics=[ + ("Python/ML/OpenAI/%s" % openai.__version__, 1), + ], + background_task=True, +) @validate_custom_events(invalid_model_events) @validate_custom_event_count(count=1) @background_task() @@ -169,6 +190,15 @@ def test_embeddings_invalid_request_error_invalid_model(set_trace_info): "error.message": "No API key provided. 
You can set your API key in code using 'openai.api_key = ', or you can set the environment variable OPENAI_API_KEY=). If your API key is stored in a file, you can point the openai module at it with 'openai.api_key_path = '. You can generate API keys in the OpenAI web interface. See https://platform.openai.com/account/api-keys for details.", } ) +@validate_transaction_metrics( + name="test_embeddings_error:test_embeddings_authentication_error", + scoped_metrics=[("Llm/embedding/OpenAI/create", 1)], + rollup_metrics=[("Llm/embedding/OpenAI/create", 1)], + custom_metrics=[ + ("Python/ML/OpenAI/%s" % openai.__version__, 1), + ], + background_task=True, +) @validate_custom_events(embedding_auth_error_events) @validate_custom_event_count(count=1) @background_task() @@ -219,6 +249,15 @@ def test_embeddings_authentication_error(monkeypatch, set_trace_info): "error.message": "Incorrect API key provided: DEADBEEF. You can find your API key at https://platform.openai.com/account/api-keys.", } ) +@validate_transaction_metrics( + name="test_embeddings_error:test_embeddings_wrong_api_key_error", + scoped_metrics=[("Llm/embedding/OpenAI/create", 1)], + rollup_metrics=[("Llm/embedding/OpenAI/create", 1)], + custom_metrics=[ + ("Python/ML/OpenAI/%s" % openai.__version__, 1), + ], + background_task=True, +) @validate_custom_events(embedding_invalid_key_error_events) @validate_custom_event_count(count=1) @background_task() @@ -250,6 +289,15 @@ def test_embeddings_wrong_api_key_error(monkeypatch, set_trace_info): "error.message": "Must provide an 'engine' or 'model' parameter to create a ", } ) +@validate_transaction_metrics( + name="test_embeddings_error:test_embeddings_invalid_request_error_no_model_async", + scoped_metrics=[("Llm/embedding/OpenAI/acreate", 1)], + rollup_metrics=[("Llm/embedding/OpenAI/acreate", 1)], + custom_metrics=[ + ("Python/ML/OpenAI/%s" % openai.__version__, 1), + ], + background_task=True, +) @validate_custom_events(embedding_recorded_events) 
@validate_custom_event_count(count=1) @background_task() @@ -282,6 +330,15 @@ def test_embeddings_invalid_request_error_no_model_async(loop, set_trace_info): "error.message": "The model `does-not-exist` does not exist", } ) +@validate_transaction_metrics( + name="test_embeddings_error:test_embeddings_invalid_request_error_invalid_model_async", + scoped_metrics=[("Llm/embedding/OpenAI/acreate", 1)], + rollup_metrics=[("Llm/embedding/OpenAI/acreate", 1)], + custom_metrics=[ + ("Python/ML/OpenAI/%s" % openai.__version__, 1), + ], + background_task=True, +) @validate_custom_events(invalid_model_events) @validate_custom_event_count(count=1) @background_task() @@ -307,6 +364,15 @@ def test_embeddings_invalid_request_error_invalid_model_async(loop, set_trace_in "error.message": "No API key provided. You can set your API key in code using 'openai.api_key = ', or you can set the environment variable OPENAI_API_KEY=). If your API key is stored in a file, you can point the openai module at it with 'openai.api_key_path = '. You can generate API keys in the OpenAI web interface. See https://platform.openai.com/account/api-keys for details.", } ) +@validate_transaction_metrics( + name="test_embeddings_error:test_embeddings_authentication_error_async", + scoped_metrics=[("Llm/embedding/OpenAI/acreate", 1)], + rollup_metrics=[("Llm/embedding/OpenAI/acreate", 1)], + custom_metrics=[ + ("Python/ML/OpenAI/%s" % openai.__version__, 1), + ], + background_task=True, +) @validate_custom_events(embedding_auth_error_events) @validate_custom_event_count(count=1) @background_task() @@ -335,6 +401,15 @@ def test_embeddings_authentication_error_async(loop, monkeypatch, set_trace_info "error.message": "Incorrect API key provided: DEADBEEF. 
You can find your API key at https://platform.openai.com/account/api-keys.", } ) +@validate_transaction_metrics( + name="test_embeddings_error:test_embeddings_wrong_api_key_error_async", + scoped_metrics=[("Llm/embedding/OpenAI/acreate", 1)], + rollup_metrics=[("Llm/embedding/OpenAI/acreate", 1)], + custom_metrics=[ + ("Python/ML/OpenAI/%s" % openai.__version__, 1), + ], + background_task=True, +) @validate_custom_events(embedding_invalid_key_error_events) @validate_custom_event_count(count=1) @background_task() From 12a69f56463d25d58ae28f35246e092b22809d59 Mon Sep 17 00:00:00 2001 From: Hannah Stepanek Date: Wed, 6 Dec 2023 10:35:31 -0800 Subject: [PATCH 009/199] Add early exit for streaming (#988) --- newrelic/hooks/mlmodel_openai.py | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/newrelic/hooks/mlmodel_openai.py b/newrelic/hooks/mlmodel_openai.py index bd80f6aac5..9c7d9b3081 100644 --- a/newrelic/hooks/mlmodel_openai.py +++ b/newrelic/hooks/mlmodel_openai.py @@ -28,7 +28,7 @@ def wrap_embedding_create(wrapped, instance, args, kwargs): transaction = current_transaction() - if not transaction: + if not transaction or kwargs.get("stream", False): return wrapped(*args, **kwargs) # Framework metric also used for entity tagging in the UI @@ -142,7 +142,7 @@ def wrap_embedding_create(wrapped, instance, args, kwargs): def wrap_chat_completion_create(wrapped, instance, args, kwargs): transaction = current_transaction() - if not transaction: + if not transaction or kwargs.get("stream", False): return wrapped(*args, **kwargs) # Framework metric also used for entity tagging in the UI @@ -413,7 +413,7 @@ def create_chat_completion_message_event( async def wrap_embedding_acreate(wrapped, instance, args, kwargs): transaction = current_transaction() - if not transaction: + if not transaction or kwargs.get("stream", False): return await wrapped(*args, **kwargs) # Framework metric also used for entity tagging in the UI @@ -527,7 +527,7 @@ async def 
wrap_embedding_acreate(wrapped, instance, args, kwargs): async def wrap_chat_completion_acreate(wrapped, instance, args, kwargs): transaction = current_transaction() - if not transaction: + if not transaction or kwargs.get("stream", False): return await wrapped(*args, **kwargs) # Framework metric also used for entity tagging in the UI From 3687f5eedd003fef2854ba9a0402a9cb4b677903 Mon Sep 17 00:00:00 2001 From: Hannah Stepanek Date: Wed, 6 Dec 2023 14:02:57 -0800 Subject: [PATCH 010/199] Fix span id bug (#994) * Fix bug where span_id is incorrect The span_id should be the span id of the function trace, not the span above. * Remove overriding of span-id --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> --- newrelic/hooks/external_botocore.py | 40 ++++++++++++----- newrelic/hooks/mlmodel_openai.py | 44 ++++++++++++------- .../_test_bedrock_chat_completion.py | 24 +++++----- .../_test_bedrock_embeddings.py | 4 +- tests/external_botocore/conftest.py | 4 -- tests/mlmodel_openai/conftest.py | 4 -- tests/mlmodel_openai/test_chat_completion.py | 16 +++---- .../test_chat_completion_error.py | 20 ++++----- tests/mlmodel_openai/test_embeddings.py | 2 +- tests/mlmodel_openai/test_embeddings_error.py | 8 ++-- 10 files changed, 95 insertions(+), 71 deletions(-) diff --git a/newrelic/hooks/external_botocore.py b/newrelic/hooks/external_botocore.py index 3a463284c4..9f60fccf37 100644 --- a/newrelic/hooks/external_botocore.py +++ b/newrelic/hooks/external_botocore.py @@ -311,10 +311,18 @@ def wrap_bedrock_runtime_invoke_model(wrapped, instance, args, kwargs): extractor = lambda *args: ([], {}) # Empty extractor that returns nothing + span_id = None + trace_id = None + function_name = wrapped.__name__ operation = "embedding" if model.startswith("amazon.titan-embed") else "completion" with FunctionTrace(name=function_name, group="Llm/%s/Bedrock" % (operation)) as ft: + # Get trace information + available_metadata = get_trace_linking_metadata() + 
span_id = available_metadata.get("span.id", "") + trace_id = available_metadata.get("trace.id", "") + try: response = wrapped(*args, **kwargs) except Exception as exc: @@ -337,23 +345,38 @@ def wrap_bedrock_runtime_invoke_model(wrapped, instance, args, kwargs): if model.startswith("amazon.titan-embed"): # Only available embedding models handle_embedding_event( - instance, transaction, extractor, model, response_body, response_headers, request_body, ft.duration + instance, + transaction, + extractor, + model, + response_body, + response_headers, + request_body, + ft.duration, + trace_id, + span_id, ) else: handle_chat_completion_event( - instance, transaction, extractor, model, response_body, response_headers, request_body, ft.duration + instance, + transaction, + extractor, + model, + response_body, + response_headers, + request_body, + ft.duration, + trace_id, + span_id, ) return response def handle_embedding_event( - client, transaction, extractor, model, response_body, response_headers, request_body, duration + client, transaction, extractor, model, response_body, response_headers, request_body, duration, trace_id, span_id ): embedding_id = str(uuid.uuid4()) - available_metadata = get_trace_linking_metadata() - span_id = available_metadata.get("span.id", "") - trace_id = available_metadata.get("trace.id", "") request_id = response_headers.get("x-amzn-requestid", "") settings = transaction.settings if transaction.settings is not None else global_settings() @@ -381,15 +404,12 @@ def handle_embedding_event( def handle_chat_completion_event( - client, transaction, extractor, model, response_body, response_headers, request_body, duration + client, transaction, extractor, model, response_body, response_headers, request_body, duration, trace_id, span_id ): custom_attrs_dict = transaction._custom_params conversation_id = custom_attrs_dict.get("conversation_id", "") chat_completion_id = str(uuid.uuid4()) - available_metadata = get_trace_linking_metadata() - span_id = 
available_metadata.get("span.id", "") - trace_id = available_metadata.get("trace.id", "") request_id = response_headers.get("x-amzn-requestid", "") settings = transaction.settings if transaction.settings is not None else global_settings() diff --git a/newrelic/hooks/mlmodel_openai.py b/newrelic/hooks/mlmodel_openai.py index 9c7d9b3081..6c3941fffa 100644 --- a/newrelic/hooks/mlmodel_openai.py +++ b/newrelic/hooks/mlmodel_openai.py @@ -41,16 +41,19 @@ def wrap_embedding_create(wrapped, instance, args, kwargs): api_key = getattr(openai, "api_key", None) api_key_last_four_digits = f"sk-{api_key[-4:]}" if api_key else "" - # Get trace information - available_metadata = get_trace_linking_metadata() - span_id = available_metadata.get("span.id", "") - trace_id = available_metadata.get("trace.id", "") + span_id = None + trace_id = None settings = transaction.settings if transaction.settings is not None else global_settings() function_name = wrapped.__name__ with FunctionTrace(name=function_name, group="Llm/embedding/OpenAI") as ft: + # Get trace information + available_metadata = get_trace_linking_metadata() + span_id = available_metadata.get("span.id", "") + trace_id = available_metadata.get("trace.id", "") + try: response = wrapped(*args, **kwargs) except Exception as exc: @@ -154,10 +157,8 @@ def wrap_chat_completion_create(wrapped, instance, args, kwargs): api_key = getattr(openai, "api_key", None) api_key_last_four_digits = f"sk-{api_key[-4:]}" if api_key else "" - # Get trace information - available_metadata = get_trace_linking_metadata() - span_id = available_metadata.get("span.id", "") - trace_id = available_metadata.get("trace.id", "") + span_id = None + trace_id = None # Get conversation ID off of the transaction custom_attrs_dict = transaction._custom_params @@ -170,6 +171,11 @@ def wrap_chat_completion_create(wrapped, instance, args, kwargs): function_name = wrapped.__name__ with FunctionTrace(name=function_name, group="Llm/completion/OpenAI") as ft: + # Get 
trace information + available_metadata = get_trace_linking_metadata() + span_id = available_metadata.get("span.id", "") + trace_id = available_metadata.get("trace.id", "") + try: response = wrapped(*args, **kwargs) except Exception as exc: @@ -426,16 +432,19 @@ async def wrap_embedding_acreate(wrapped, instance, args, kwargs): api_key = getattr(openai, "api_key", None) api_key_last_four_digits = f"sk-{api_key[-4:]}" if api_key else "" - # Get trace information - available_metadata = get_trace_linking_metadata() - span_id = available_metadata.get("span.id", "") - trace_id = available_metadata.get("trace.id", "") + span_id = None + trace_id = None settings = transaction.settings if transaction.settings is not None else global_settings() function_name = wrapped.__name__ with FunctionTrace(name=function_name, group="Llm/embedding/OpenAI") as ft: + # Get trace information + available_metadata = get_trace_linking_metadata() + span_id = available_metadata.get("span.id", "") + trace_id = available_metadata.get("trace.id", "") + try: response = await wrapped(*args, **kwargs) except Exception as exc: @@ -539,10 +548,8 @@ async def wrap_chat_completion_acreate(wrapped, instance, args, kwargs): api_key = getattr(openai, "api_key", None) api_key_last_four_digits = f"sk-{api_key[-4:]}" if api_key else "" - # Get trace information - available_metadata = get_trace_linking_metadata() - span_id = available_metadata.get("span.id", "") - trace_id = available_metadata.get("trace.id", "") + span_id = None + trace_id = None # Get conversation ID off of the transaction custom_attrs_dict = transaction._custom_params @@ -555,6 +562,11 @@ async def wrap_chat_completion_acreate(wrapped, instance, args, kwargs): function_name = wrapped.__name__ with FunctionTrace(name=function_name, group="Llm/completion/OpenAI") as ft: + # Get trace information + available_metadata = get_trace_linking_metadata() + span_id = available_metadata.get("span.id", "") + trace_id = available_metadata.get("trace.id", 
"") + try: response = await wrapped(*args, **kwargs) except Exception as exc: diff --git a/tests/external_botocore/_test_bedrock_chat_completion.py b/tests/external_botocore/_test_bedrock_chat_completion.py index 5c91ade6c6..c2964676a4 100644 --- a/tests/external_botocore/_test_bedrock_chat_completion.py +++ b/tests/external_botocore/_test_bedrock_chat_completion.py @@ -14,7 +14,7 @@ "appName": "Python Agent Test (external_botocore)", "conversation_id": "my-awesome-id", "transaction_id": "transaction-id", - "span_id": "span-id", + "span_id": None, "trace_id": "trace-id", "request_id": "03524118-8d77-430f-9e08-63b5c03a40cf", "api_key_last_four_digits": "CRET", @@ -39,7 +39,7 @@ "appName": "Python Agent Test (external_botocore)", "conversation_id": "my-awesome-id", "request_id": "03524118-8d77-430f-9e08-63b5c03a40cf", - "span_id": "span-id", + "span_id": None, "trace_id": "trace-id", "transaction_id": "transaction-id", "content": "What is 212 degrees Fahrenheit converted to Celsius?", @@ -58,7 +58,7 @@ "appName": "Python Agent Test (external_botocore)", "conversation_id": "my-awesome-id", "request_id": "03524118-8d77-430f-9e08-63b5c03a40cf", - "span_id": "span-id", + "span_id": None, "trace_id": "trace-id", "transaction_id": "transaction-id", "content": "\nUse the formula,\n°C = (°F - 32) x 5/9\n= 212 x 5/9\n= 100 degrees Celsius\n212 degrees Fahrenheit is 100 degrees Celsius.", @@ -79,7 +79,7 @@ "appName": "Python Agent Test (external_botocore)", "conversation_id": "my-awesome-id", "transaction_id": "transaction-id", - "span_id": "span-id", + "span_id": None, "trace_id": "trace-id", "request_id": "c863d9fc-888b-421c-a175-ac5256baec62", "response_id": "1234", @@ -102,7 +102,7 @@ "appName": "Python Agent Test (external_botocore)", "conversation_id": "my-awesome-id", "request_id": "c863d9fc-888b-421c-a175-ac5256baec62", - "span_id": "span-id", + "span_id": None, "trace_id": "trace-id", "transaction_id": "transaction-id", "content": "What is 212 degrees Fahrenheit 
converted to Celsius?", @@ -121,7 +121,7 @@ "appName": "Python Agent Test (external_botocore)", "conversation_id": "my-awesome-id", "request_id": "c863d9fc-888b-421c-a175-ac5256baec62", - "span_id": "span-id", + "span_id": None, "trace_id": "trace-id", "transaction_id": "transaction-id", "content": "\n212 degrees Fahrenheit is equal to 100 degrees Celsius.", @@ -142,7 +142,7 @@ "appName": "Python Agent Test (external_botocore)", "conversation_id": "my-awesome-id", "transaction_id": "transaction-id", - "span_id": "span-id", + "span_id": None, "trace_id": "trace-id", "request_id": "7b0b37c6-85fb-4664-8f5b-361ca7b1aa18", "api_key_last_four_digits": "CRET", @@ -164,7 +164,7 @@ "appName": "Python Agent Test (external_botocore)", "conversation_id": "my-awesome-id", "request_id": "7b0b37c6-85fb-4664-8f5b-361ca7b1aa18", - "span_id": "span-id", + "span_id": None, "trace_id": "trace-id", "transaction_id": "transaction-id", "content": "Human: What is 212 degrees Fahrenheit converted to Celsius? Assistant:", @@ -183,7 +183,7 @@ "appName": "Python Agent Test (external_botocore)", "conversation_id": "my-awesome-id", "request_id": "7b0b37c6-85fb-4664-8f5b-361ca7b1aa18", - "span_id": "span-id", + "span_id": None, "trace_id": "trace-id", "transaction_id": "transaction-id", "content": " Okay, here are the conversion steps:\n212 degrees Fahrenheit\n- Subtract 32 from 212 to get 180 (to convert from Fahrenheit to Celsius scale)\n- Multiply by 5/9 (because the formula is °C = (°F - 32) × 5/9)\n- 180 × 5/9 = 100\n\nSo 212 degrees Fahrenheit converted to Celsius is 100 degrees Celsius.", @@ -204,7 +204,7 @@ "appName": "Python Agent Test (external_botocore)", "conversation_id": "my-awesome-id", "transaction_id": "transaction-id", - "span_id": "span-id", + "span_id": None, "trace_id": "trace-id", "request_id": "e77422c8-fbbf-4e17-afeb-c758425c9f97", "response_id": None, # UUID that varies with each run @@ -227,7 +227,7 @@ "appName": "Python Agent Test (external_botocore)", 
"conversation_id": "my-awesome-id", "request_id": "e77422c8-fbbf-4e17-afeb-c758425c9f97", - "span_id": "span-id", + "span_id": None, "trace_id": "trace-id", "transaction_id": "transaction-id", "content": "What is 212 degrees Fahrenheit converted to Celsius?", @@ -246,7 +246,7 @@ "appName": "Python Agent Test (external_botocore)", "conversation_id": "my-awesome-id", "request_id": "e77422c8-fbbf-4e17-afeb-c758425c9f97", - "span_id": "span-id", + "span_id": None, "trace_id": "trace-id", "transaction_id": "transaction-id", "content": " 212°F is equivalent to 100°C. \n\nFahrenheit and Celsius are two temperature scales commonly used in everyday life. The Fahrenheit scale is based on 32°F for the freezing point of water and 212°F for the boiling point of water. On the other hand, the Celsius scale uses 0°C and 100°C as the freezing and boiling points of water, respectively. \n\nTo convert from Fahrenheit to Celsius, we subtract 32 from the Fahrenheit temperature and multiply the result", diff --git a/tests/external_botocore/_test_bedrock_embeddings.py b/tests/external_botocore/_test_bedrock_embeddings.py index 2367f7af81..c47d6692a5 100644 --- a/tests/external_botocore/_test_bedrock_embeddings.py +++ b/tests/external_botocore/_test_bedrock_embeddings.py @@ -11,7 +11,7 @@ "id": None, # UUID that varies with each run "appName": "Python Agent Test (external_botocore)", "transaction_id": "transaction-id", - "span_id": "span-id", + "span_id": None, "trace_id": "trace-id", "input": "This is an embedding test.", "api_key_last_four_digits": "CRET", @@ -33,7 +33,7 @@ "id": None, # UUID that varies with each run "appName": "Python Agent Test (external_botocore)", "transaction_id": "transaction-id", - "span_id": "span-id", + "span_id": None, "trace_id": "trace-id", "input": "This is an embedding test.", "api_key_last_four_digits": "CRET", diff --git a/tests/external_botocore/conftest.py b/tests/external_botocore/conftest.py index 38c2fb03d1..0df606b55b 100644 --- 
a/tests/external_botocore/conftest.py +++ b/tests/external_botocore/conftest.py @@ -26,7 +26,6 @@ collector_available_fixture, ) -from newrelic.api.time_trace import current_trace from newrelic.api.transaction import current_transaction from newrelic.common.object_wrapper import wrap_function_wrapper from newrelic.common.package_version_utils import ( @@ -157,8 +156,5 @@ def _set_trace_info(): if txn: txn.guid = "transaction-id" txn._trace_id = "trace-id" - trace = current_trace() - if trace: - trace.guid = "span-id" return _set_trace_info diff --git a/tests/mlmodel_openai/conftest.py b/tests/mlmodel_openai/conftest.py index b3511235af..15518aa1a7 100644 --- a/tests/mlmodel_openai/conftest.py +++ b/tests/mlmodel_openai/conftest.py @@ -28,7 +28,6 @@ collector_available_fixture, ) -from newrelic.api.time_trace import current_trace from newrelic.api.transaction import current_transaction from newrelic.common.object_wrapper import wrap_function_wrapper @@ -58,9 +57,6 @@ def set_info(): if txn: txn.guid = "transaction-id" txn._trace_id = "trace-id" - trace = current_trace() - if trace: - trace.guid = "span-id" return set_info diff --git a/tests/mlmodel_openai/test_chat_completion.py b/tests/mlmodel_openai/test_chat_completion.py index ec871b9476..4e582f4638 100644 --- a/tests/mlmodel_openai/test_chat_completion.py +++ b/tests/mlmodel_openai/test_chat_completion.py @@ -41,7 +41,7 @@ "appName": "Python Agent Test (mlmodel_openai)", "conversation_id": "my-awesome-id", "transaction_id": "transaction-id", - "span_id": "span-id", + "span_id": None, "trace_id": "trace-id", "request_id": "49dbbffbd3c3f4612aa48def69059ccd", "api_key_last_four_digits": "sk-CRET", @@ -75,7 +75,7 @@ "appName": "Python Agent Test (mlmodel_openai)", "conversation_id": "my-awesome-id", "request_id": "49dbbffbd3c3f4612aa48def69059ccd", - "span_id": "span-id", + "span_id": None, "trace_id": "trace-id", "transaction_id": "transaction-id", "content": "You are a scientist.", @@ -94,7 +94,7 @@ "appName": 
"Python Agent Test (mlmodel_openai)", "conversation_id": "my-awesome-id", "request_id": "49dbbffbd3c3f4612aa48def69059ccd", - "span_id": "span-id", + "span_id": None, "trace_id": "trace-id", "transaction_id": "transaction-id", "content": "What is 212 degrees Fahrenheit converted to Celsius?", @@ -113,7 +113,7 @@ "appName": "Python Agent Test (mlmodel_openai)", "conversation_id": "my-awesome-id", "request_id": "49dbbffbd3c3f4612aa48def69059ccd", - "span_id": "span-id", + "span_id": None, "trace_id": "trace-id", "transaction_id": "transaction-id", "content": "212 degrees Fahrenheit is equal to 100 degrees Celsius.", @@ -157,7 +157,7 @@ def test_openai_chat_completion_sync_in_txn_with_convo_id(set_trace_info): "appName": "Python Agent Test (mlmodel_openai)", "conversation_id": "", "transaction_id": "transaction-id", - "span_id": "span-id", + "span_id": None, "trace_id": "trace-id", "request_id": "49dbbffbd3c3f4612aa48def69059ccd", "api_key_last_four_digits": "sk-CRET", @@ -191,7 +191,7 @@ def test_openai_chat_completion_sync_in_txn_with_convo_id(set_trace_info): "appName": "Python Agent Test (mlmodel_openai)", "conversation_id": "", "request_id": "49dbbffbd3c3f4612aa48def69059ccd", - "span_id": "span-id", + "span_id": None, "trace_id": "trace-id", "transaction_id": "transaction-id", "content": "You are a scientist.", @@ -210,7 +210,7 @@ def test_openai_chat_completion_sync_in_txn_with_convo_id(set_trace_info): "appName": "Python Agent Test (mlmodel_openai)", "conversation_id": "", "request_id": "49dbbffbd3c3f4612aa48def69059ccd", - "span_id": "span-id", + "span_id": None, "trace_id": "trace-id", "transaction_id": "transaction-id", "content": "What is 212 degrees Fahrenheit converted to Celsius?", @@ -229,7 +229,7 @@ def test_openai_chat_completion_sync_in_txn_with_convo_id(set_trace_info): "appName": "Python Agent Test (mlmodel_openai)", "conversation_id": "", "request_id": "49dbbffbd3c3f4612aa48def69059ccd", - "span_id": "span-id", + "span_id": None, "trace_id": 
"trace-id", "transaction_id": "transaction-id", "content": "212 degrees Fahrenheit is equal to 100 degrees Celsius.", diff --git a/tests/mlmodel_openai/test_chat_completion_error.py b/tests/mlmodel_openai/test_chat_completion_error.py index 812e7166e3..fe298c02bb 100644 --- a/tests/mlmodel_openai/test_chat_completion_error.py +++ b/tests/mlmodel_openai/test_chat_completion_error.py @@ -46,7 +46,7 @@ "appName": "Python Agent Test (mlmodel_openai)", "transaction_id": "transaction-id", "conversation_id": "my-awesome-id", - "span_id": "span-id", + "span_id": None, "trace_id": "trace-id", "api_key_last_four_digits": "sk-CRET", "duration": None, # Response time varies each test run @@ -67,7 +67,7 @@ "appName": "Python Agent Test (mlmodel_openai)", "conversation_id": "my-awesome-id", "request_id": "", - "span_id": "span-id", + "span_id": None, "trace_id": "trace-id", "transaction_id": "transaction-id", "content": "You are a scientist.", @@ -86,7 +86,7 @@ "appName": "Python Agent Test (mlmodel_openai)", "conversation_id": "my-awesome-id", "request_id": "", - "span_id": "span-id", + "span_id": None, "trace_id": "trace-id", "transaction_id": "transaction-id", "content": "What is 212 degrees Fahrenheit converted to Celsius?", @@ -148,7 +148,7 @@ def test_chat_completion_invalid_request_error_no_model(set_trace_info): "appName": "Python Agent Test (mlmodel_openai)", "transaction_id": "transaction-id", "conversation_id": "my-awesome-id", - "span_id": "span-id", + "span_id": None, "trace_id": "trace-id", "api_key_last_four_digits": "sk-CRET", "duration": None, # Response time varies each test run @@ -169,7 +169,7 @@ def test_chat_completion_invalid_request_error_no_model(set_trace_info): "appName": "Python Agent Test (mlmodel_openai)", "conversation_id": "my-awesome-id", "request_id": "", - "span_id": "span-id", + "span_id": None, "trace_id": "trace-id", "transaction_id": "transaction-id", "content": "Model does not exist.", @@ -232,7 +232,7 @@ def 
test_chat_completion_invalid_request_error_invalid_model(set_trace_info): "appName": "Python Agent Test (mlmodel_openai)", "transaction_id": "transaction-id", "conversation_id": "my-awesome-id", - "span_id": "span-id", + "span_id": None, "trace_id": "trace-id", "api_key_last_four_digits": "", "duration": None, # Response time varies each test run @@ -253,7 +253,7 @@ def test_chat_completion_invalid_request_error_invalid_model(set_trace_info): "appName": "Python Agent Test (mlmodel_openai)", "conversation_id": "my-awesome-id", "request_id": "", - "span_id": "span-id", + "span_id": None, "trace_id": "trace-id", "transaction_id": "transaction-id", "content": "You are a scientist.", @@ -272,7 +272,7 @@ def test_chat_completion_invalid_request_error_invalid_model(set_trace_info): "appName": "Python Agent Test (mlmodel_openai)", "conversation_id": "my-awesome-id", "request_id": "", - "span_id": "span-id", + "span_id": None, "trace_id": "trace-id", "transaction_id": "transaction-id", "content": "What is 212 degrees Fahrenheit converted to Celsius?", @@ -333,7 +333,7 @@ def test_chat_completion_authentication_error(monkeypatch, set_trace_info): "appName": "Python Agent Test (mlmodel_openai)", "transaction_id": "transaction-id", "conversation_id": "", - "span_id": "span-id", + "span_id": None, "trace_id": "trace-id", "api_key_last_four_digits": "sk-BEEF", "duration": None, # Response time varies each test run @@ -354,7 +354,7 @@ def test_chat_completion_authentication_error(monkeypatch, set_trace_info): "appName": "Python Agent Test (mlmodel_openai)", "conversation_id": "", "request_id": "", - "span_id": "span-id", + "span_id": None, "trace_id": "trace-id", "transaction_id": "transaction-id", "content": "Invalid API key.", diff --git a/tests/mlmodel_openai/test_embeddings.py b/tests/mlmodel_openai/test_embeddings.py index 24ce067ce9..ae2c048fc2 100644 --- a/tests/mlmodel_openai/test_embeddings.py +++ b/tests/mlmodel_openai/test_embeddings.py @@ -34,7 +34,7 @@ "id": None, # 
UUID that varies with each run "appName": "Python Agent Test (mlmodel_openai)", "transaction_id": "transaction-id", - "span_id": "span-id", + "span_id": None, "trace_id": "trace-id", "input": "This is an embedding test.", "api_key_last_four_digits": "sk-CRET", diff --git a/tests/mlmodel_openai/test_embeddings_error.py b/tests/mlmodel_openai/test_embeddings_error.py index 97e53e048c..fd6523a47e 100644 --- a/tests/mlmodel_openai/test_embeddings_error.py +++ b/tests/mlmodel_openai/test_embeddings_error.py @@ -39,7 +39,7 @@ "id": None, # UUID that varies with each run "appName": "Python Agent Test (mlmodel_openai)", "transaction_id": "transaction-id", - "span_id": "span-id", + "span_id": None, "trace_id": "trace-id", "input": "This is an embedding test with no model.", "api_key_last_four_digits": "sk-CRET", @@ -100,7 +100,7 @@ def test_embeddings_invalid_request_error_no_model(set_trace_info): "id": None, # UUID that varies with each run "appName": "Python Agent Test (mlmodel_openai)", "transaction_id": "transaction-id", - "span_id": "span-id", + "span_id": None, "trace_id": "trace-id", "input": "Model does not exist.", "api_key_last_four_digits": "sk-CRET", @@ -159,7 +159,7 @@ def test_embeddings_invalid_request_error_invalid_model(set_trace_info): "id": None, # UUID that varies with each run "appName": "Python Agent Test (mlmodel_openai)", "transaction_id": "transaction-id", - "span_id": "span-id", + "span_id": None, "trace_id": "trace-id", "input": "Invalid API key.", "api_key_last_four_digits": "", @@ -216,7 +216,7 @@ def test_embeddings_authentication_error(monkeypatch, set_trace_info): "id": None, # UUID that varies with each run "appName": "Python Agent Test (mlmodel_openai)", "transaction_id": "transaction-id", - "span_id": "span-id", + "span_id": None, "trace_id": "trace-id", "input": "Embedded: Invalid API key.", "api_key_last_four_digits": "sk-BEEF", From 9ff0588516eed17da641b06df2289644e568939e Mon Sep 17 00:00:00 2001 From: Uma Annamalai Date: Thu, 7 Dec 
2023 09:26:56 -0800 Subject: [PATCH 011/199] Refactor Bedrock Error Tracing (#991) * Add notice_error attributes. * --allow-empty * Add chat completion error tracing. * Address test failures. * Fix bug where span_id is incorrect The span_id should be the span id of the function trace, not the span above. * Remove overriding of span-id * Add notice_error attributes. * --allow-empty * Add chat completion error tracing. * Address test failures. * Fixup: merge conflicts * Change vendor name casing. * Fix casing. * Fix transaction naming. --------- Co-authored-by: Hannah Stepanek Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> --- newrelic/hooks/external_botocore.py | 158 +++++++---- .../_test_bedrock_chat_completion.py | 253 +++++++++++++++--- .../_test_bedrock_embeddings.py | 55 +++- .../test_bedrock_chat_completion.py | 101 ++++--- .../test_bedrock_embeddings.py | 11 +- 5 files changed, 454 insertions(+), 124 deletions(-) diff --git a/newrelic/hooks/external_botocore.py b/newrelic/hooks/external_botocore.py index 9f60fccf37..12bdfcafe2 100644 --- a/newrelic/hooks/external_botocore.py +++ b/newrelic/hooks/external_botocore.py @@ -64,14 +64,14 @@ def bedrock_error_attributes(exception, request_args, client, extractor): return {} request_body = request_args.get("body", "") - error_attributes = extractor(request_body)[1] + error_attributes = extractor(request_body)[2] error_attributes.update( { "request_id": response.get("ResponseMetadata", {}).get("RequestId", ""), "api_key_last_four_digits": client._request_signer._credentials.access_key[-4:], "request.model": request_args.get("modelId", ""), - "vendor": "Bedrock", + "vendor": "bedrock", "ingest_source": "Python", "http.statusCode": response.get("ResponseMetadata", "").get("HTTPStatusCode", ""), "error.message": response.get("Error", "").get("Message", ""), @@ -84,7 +84,8 @@ def bedrock_error_attributes(exception, request_args, client, extractor): def create_chat_completion_message_event( 
transaction, app_name, - message_list, + input_message_list, + output_message_list, chat_completion_id, span_id, trace_id, @@ -96,7 +97,33 @@ def create_chat_completion_message_event( if not transaction: return - for index, message in enumerate(message_list): + for index, message in enumerate(input_message_list): + if response_id: + id_ = "%s-%d" % (response_id, index) # Response ID was set, append message index to it. + else: + id_ = str(uuid.uuid4()) # No response IDs, use random UUID + + chat_completion_message_dict = { + "id": id_, + "appName": app_name, + "conversation_id": conversation_id, + "request_id": request_id, + "span_id": span_id, + "trace_id": trace_id, + "transaction_id": transaction.guid, + "content": message.get("content", ""), + "role": message.get("role"), + "completion_id": chat_completion_id, + "sequence": index, + "response.model": request_model, + "vendor": "bedrock", + "ingest_source": "Python", + } + transaction.record_custom_event("LlmChatCompletionMessage", chat_completion_message_dict) + + for index, message in enumerate(output_message_list): + index += len(input_message_list) + if response_id: id_ = "%s-%d" % (response_id, index) # Response ID was set, append message index to it. 
else: @@ -117,6 +144,7 @@ def create_chat_completion_message_event( "response.model": request_model, "vendor": "bedrock", "ingest_source": "Python", + "is_response": True } transaction.record_custom_event("LlmChatCompletionMessage", chat_completion_message_dict) @@ -128,9 +156,13 @@ def extract_bedrock_titan_text_model(request_body, response_body=None): request_config = request_body.get("textGenerationConfig", {}) + input_message_list = [{"role": "user", "content": request_body.get("inputText", "")}] + + chat_completion_summary_dict = { "request.max_tokens": request_config.get("maxTokenCount", ""), "request.temperature": request_config.get("temperature", ""), + "response.number_of_messages": len(input_message_list), } if response_body: @@ -138,10 +170,7 @@ def extract_bedrock_titan_text_model(request_body, response_body=None): completion_tokens = sum(result["tokenCount"] for result in response_body.get("results", [])) total_tokens = input_tokens + completion_tokens - message_list = [{"role": "user", "content": request_body.get("inputText", "")}] - message_list.extend( - {"role": "assistant", "content": result["outputText"]} for result in response_body.get("results", []) - ) + output_message_list = [{"role": "assistant", "content": result["outputText"]} for result in response_body.get("results", [])] chat_completion_summary_dict.update( { @@ -149,18 +178,18 @@ def extract_bedrock_titan_text_model(request_body, response_body=None): "response.usage.completion_tokens": completion_tokens, "response.usage.prompt_tokens": input_tokens, "response.usage.total_tokens": total_tokens, - "response.number_of_messages": len(message_list), + "response.number_of_messages": len(input_message_list) + len(output_message_list), } ) else: - message_list = [] + output_message_list = [] - return message_list, chat_completion_summary_dict + return input_message_list, output_message_list, chat_completion_summary_dict def extract_bedrock_titan_embedding_model(request_body, 
response_body=None): if not response_body: - return [], {} # No extracted information necessary for embedding + return [], [], {} # No extracted information necessary for embedding request_body = json.loads(request_body) response_body = json.loads(response_body) @@ -172,7 +201,7 @@ def extract_bedrock_titan_embedding_model(request_body, response_body=None): "response.usage.prompt_tokens": input_tokens, "response.usage.total_tokens": input_tokens, } - return [], embedding_dict + return [], [], embedding_dict def extract_bedrock_ai21_j2_model(request_body, response_body=None): @@ -180,28 +209,28 @@ def extract_bedrock_ai21_j2_model(request_body, response_body=None): if response_body: response_body = json.loads(response_body) + input_message_list = [{"role": "user", "content": request_body.get("prompt", "")}] + chat_completion_summary_dict = { "request.max_tokens": request_body.get("maxTokens", ""), "request.temperature": request_body.get("temperature", ""), + "response.number_of_messages": len(input_message_list), } if response_body: - message_list = [{"role": "user", "content": request_body.get("prompt", "")}] - message_list.extend( - {"role": "assistant", "content": result["data"]["text"]} for result in response_body.get("completions", []) - ) + output_message_list = [{"role": "assistant", "content": result["data"]["text"]} for result in response_body.get("completions", [])] chat_completion_summary_dict.update( { "response.choices.finish_reason": response_body["completions"][0]["finishReason"]["reason"], - "response.number_of_messages": len(message_list), + "response.number_of_messages": len(input_message_list) + len(output_message_list), "response_id": str(response_body.get("id", "")), } ) else: - message_list = [] + output_message_list = [] - return message_list, chat_completion_summary_dict + return input_message_list, output_message_list, chat_completion_summary_dict def extract_bedrock_claude_model(request_body, response_body=None): @@ -209,27 +238,27 @@ def
extract_bedrock_claude_model(request_body, response_body=None): if response_body: response_body = json.loads(response_body) + input_message_list = [{"role": "user", "content": request_body.get("prompt", "")}] + chat_completion_summary_dict = { "request.max_tokens": request_body.get("max_tokens_to_sample", ""), "request.temperature": request_body.get("temperature", ""), + "response.number_of_messages": len(input_message_list), } if response_body: - message_list = [ - {"role": "user", "content": request_body.get("prompt", "")}, - {"role": "assistant", "content": response_body.get("completion", "")}, - ] + output_message_list = [{"role": "assistant", "content": response_body.get("completion", "")}] chat_completion_summary_dict.update( { "response.choices.finish_reason": response_body.get("stop_reason", ""), - "response.number_of_messages": len(message_list), + "response.number_of_messages": len(input_message_list) + len(output_message_list), } ) else: - message_list = [] + output_message_list = [] - return message_list, chat_completion_summary_dict + return input_message_list, output_message_list, chat_completion_summary_dict def extract_bedrock_cohere_model(request_body, response_body=None): @@ -237,30 +266,27 @@ def extract_bedrock_cohere_model(request_body, response_body=None): if response_body: response_body = json.loads(response_body) + input_message_list = [{"role": "user", "content": request_body.get("prompt", "")}] + chat_completion_summary_dict = { "request.max_tokens": request_body.get("max_tokens", ""), "request.temperature": request_body.get("temperature", ""), + "response.number_of_messages": len(input_message_list), } if response_body: - message_list = [{"role": "user", "content": request_body.get("prompt", "")}] - message_list.extend( - {"role": "assistant", "content": result["text"]} for result in
response_body.get("generations", [])] chat_completion_summary_dict.update( { - "request.max_tokens": request_body.get("max_tokens", ""), - "request.temperature": request_body.get("temperature", ""), "response.choices.finish_reason": response_body["generations"][0]["finish_reason"], - "response.number_of_messages": len(message_list), + "response.number_of_messages": len(input_message_list) + len(output_message_list), "response_id": str(response_body.get("id", "")), } ) else: - message_list = [] + output_message_list = [] - return message_list, chat_completion_summary_dict + return input_message_list, output_message_list, chat_completion_summary_dict MODEL_EXTRACTORS = [ # Order is important here, avoiding dictionaries @@ -294,6 +320,8 @@ def wrap_bedrock_runtime_invoke_model(wrapped, instance, args, kwargs): if not model: return wrapped(*args, **kwargs) + is_embedding = model.startswith("amazon.titan-embed") + # Determine extractor by model type for extractor_name, extractor in MODEL_EXTRACTORS: if model.startswith(extractor_name): @@ -309,7 +337,7 @@ def wrap_bedrock_runtime_invoke_model(wrapped, instance, args, kwargs): ) UNSUPPORTED_MODEL_WARNING_SENT = True - extractor = lambda *args: ([], {}) # Empty extractor that returns nothing + extractor = lambda *args: ([], [], {}) # Empty extractor that returns nothing span_id = None trace_id = None @@ -329,9 +357,32 @@ def wrap_bedrock_runtime_invoke_model(wrapped, instance, args, kwargs): try: error_attributes = extractor(request_body) error_attributes = bedrock_error_attributes(exc, kwargs, instance, extractor) + notice_error_attributes = { + "http.statusCode": error_attributes["http.statusCode"], + "error.message": error_attributes["error.message"], + "error.code": error_attributes["error.code"], + } + + if is_embedding: + notice_error_attributes.update({"embedding_id": str(uuid.uuid4())}) + else: + notice_error_attributes.update({"completion_id": str(uuid.uuid4())}) +
ft.notice_error( - attributes=error_attributes, + attributes=notice_error_attributes, ) + + if operation == "embedding": # Only available embedding models + handle_embedding_event( + instance, transaction, extractor, model, None, None, request_body, + ft.duration, True, trace_id, span_id + ) + else: + handle_chat_completion_event( + instance, transaction, extractor, model, None, None, request_body, + ft.duration, True, trace_id, span_id + ) + finally: raise @@ -343,7 +397,7 @@ def wrap_bedrock_runtime_invoke_model(wrapped, instance, args, kwargs): response["body"] = StreamingBody(BytesIO(response_body), len(response_body)) response_headers = response["ResponseMetadata"]["HTTPHeaders"] - if model.startswith("amazon.titan-embed"): # Only available embedding models + if operation == "embedding": # Only available embedding models handle_embedding_event( instance, transaction, @@ -353,6 +407,7 @@ def wrap_bedrock_runtime_invoke_model(wrapped, instance, args, kwargs): response_headers, request_body, ft.duration, + False, trace_id, span_id, ) @@ -366,6 +421,7 @@ def wrap_bedrock_runtime_invoke_model(wrapped, instance, args, kwargs): response_headers, request_body, ft.duration, + False, trace_id, span_id, ) @@ -374,14 +430,16 @@ def wrap_bedrock_runtime_invoke_model(wrapped, instance, args, kwargs): def handle_embedding_event( - client, transaction, extractor, model, response_body, response_headers, request_body, duration, trace_id, span_id + client, transaction, extractor, model, response_body, response_headers, request_body, duration, is_error, trace_id, span_id ): embedding_id = str(uuid.uuid4()) - request_id = response_headers.get("x-amzn-requestid", "") + request_id = response_headers.get("x-amzn-requestid", "") if response_headers else "" settings = transaction.settings if transaction.settings is not None else global_settings() - _, embedding_dict = extractor(request_body, response_body) + _, _, embedding_dict = extractor(request_body, response_body) + + request_body 
= json.loads(request_body) embedding_dict.update( { @@ -392,6 +450,7 @@ def handle_embedding_event( "span_id": span_id, "trace_id": trace_id, "request_id": request_id, + "input": request_body.get("inputText", ""), "transaction_id": transaction.guid, "api_key_last_four_digits": client._request_signer._credentials.access_key[-4:], "duration": duration, @@ -399,22 +458,24 @@ def handle_embedding_event( "response.model": model, } ) + if is_error: + embedding_dict.update({"error": True}) transaction.record_custom_event("LlmEmbedding", embedding_dict) def handle_chat_completion_event( - client, transaction, extractor, model, response_body, response_headers, request_body, duration, trace_id, span_id + client, transaction, extractor, model, response_body, response_headers, request_body, duration, is_error, trace_id, span_id ): custom_attrs_dict = transaction._custom_params conversation_id = custom_attrs_dict.get("conversation_id", "") chat_completion_id = str(uuid.uuid4()) - request_id = response_headers.get("x-amzn-requestid", "") + request_id = response_headers.get("x-amzn-requestid", "") if response_headers else "" settings = transaction.settings if transaction.settings is not None else global_settings() - message_list, chat_completion_summary_dict = extractor(request_body, response_body) + input_message_list, output_message_list, chat_completion_summary_dict = extractor(request_body, response_body) response_id = chat_completion_summary_dict.get("response_id", "") chat_completion_summary_dict.update( { @@ -433,13 +494,16 @@ def handle_chat_completion_event( "response.model": model, # Duplicate data required by the UI } ) + if is_error: + chat_completion_summary_dict.update({"error": True}) transaction.record_custom_event("LlmChatCompletionSummary", chat_completion_summary_dict) create_chat_completion_message_event( transaction=transaction, app_name=settings.app_name, - message_list=message_list, + input_message_list=input_message_list, + 
output_message_list=output_message_list, chat_completion_id=chat_completion_id, span_id=span_id, trace_id=trace_id, diff --git a/tests/external_botocore/_test_bedrock_chat_completion.py b/tests/external_botocore/_test_bedrock_chat_completion.py index c2964676a4..e3f53fd31f 100644 --- a/tests/external_botocore/_test_bedrock_chat_completion.py +++ b/tests/external_botocore/_test_bedrock_chat_completion.py @@ -68,6 +68,7 @@ "response.model": "amazon.titan-text-express-v1", "vendor": "bedrock", "ingest_source": "Python", + "is_response": True, }, ), ], @@ -131,6 +132,7 @@ "response.model": "ai21.j2-mid-v1", "vendor": "bedrock", "ingest_source": "Python", + "is_response": True, }, ), ], @@ -193,6 +195,7 @@ "response.model": "anthropic.claude-instant-v1", "vendor": "bedrock", "ingest_source": "Python", + "is_response": True, }, ), ], @@ -256,6 +259,224 @@ "response.model": "cohere.command-text-v14", "vendor": "bedrock", "ingest_source": "Python", + "is_response": True, + }, + ), + ], +} + +chat_completion_invalid_model_error_events = [ + ( + {"type": "LlmChatCompletionSummary"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (external_botocore)", + "transaction_id": "transaction-id", + "conversation_id": "my-awesome-id", + "span_id": None, + "trace_id": "trace-id", + "api_key_last_four_digits": "CRET", + "duration": None, # Response time varies each test run + "request.model": "does-not-exist", + "request.temperature": 0.7, + "request.max_tokens": 100, + "response.number_of_messages": 1, + "vendor": "bedrock", + "ingest_source": "Python", + "error": True, + }, + ), + ( + {"type": "LlmChatCompletionMessage"}, + { + "id": None, + "appName": "Python Agent Test (external_botocore)", + "conversation_id": "my-awesome-id", + "request_id": "", + "span_id": None, + "trace_id": "trace-id", + "transaction_id": "transaction-id", + "content": "You are a scientist.", + "role": "system", + "response.model": "", + "completion_id": None, + "sequence": 0, +
"vendor": "bedrock", + "ingest_source": "Python", + }, + ), +] + +chat_completion_invalid_access_key_error_events = { + "amazon.titan-text-express-v1": [ + ( + {"type": "LlmChatCompletionSummary"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (external_botocore)", + "conversation_id": "my-awesome-id", + "transaction_id": "transaction-id", + "span_id": None, + "trace_id": "trace-id", + "request_id": "", + "api_key_last_four_digits": "-KEY", + "duration": None, # Response time varies each test run + "request.model": "amazon.titan-text-express-v1", + "response.model": "amazon.titan-text-express-v1", + "request.temperature": 0.7, + "request.max_tokens": 100, + "vendor": "bedrock", + "ingest_source": "Python", + "response.number_of_messages": 1, + "error": True, + }, + ), + ( + {"type": "LlmChatCompletionMessage"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (external_botocore)", + "conversation_id": "my-awesome-id", + "request_id": "", + "span_id": None, + "trace_id": "trace-id", + "transaction_id": "transaction-id", + "content": "Invalid Token", + "role": "user", + "completion_id": None, + "sequence": 0, + "response.model": "amazon.titan-text-express-v1", + "vendor": "bedrock", + "ingest_source": "Python", + }, + ), + ], + "ai21.j2-mid-v1": [ + ( + {"type": "LlmChatCompletionSummary"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (external_botocore)", + "conversation_id": "my-awesome-id", + "transaction_id": "transaction-id", + "span_id": None, + "trace_id": "trace-id", + "request_id": "", + "api_key_last_four_digits": "-KEY", + "duration": None, # Response time varies each test run + "request.model": "ai21.j2-mid-v1", + "response.model": "ai21.j2-mid-v1", + "request.temperature": 0.7, + "request.max_tokens": 100, + "vendor": "bedrock", + "ingest_source": "Python", + "response.number_of_messages": 1, + "error": True, + }, + ), + ( + {"type": 
"LlmChatCompletionMessage"}, + { + "id": None, + "appName": "Python Agent Test (external_botocore)", + "conversation_id": "my-awesome-id", + "request_id": "", + "span_id": None, + "trace_id": "trace-id", + "transaction_id": "transaction-id", + "content": "Invalid Token", + "role": "user", + "completion_id": None, + "sequence": 0, + "response.model": "ai21.j2-mid-v1", + "vendor": "bedrock", + "ingest_source": "Python", + }, + ), + ], + "anthropic.claude-instant-v1": [ + ( + {"type": "LlmChatCompletionSummary"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (external_botocore)", + "conversation_id": "my-awesome-id", + "transaction_id": "transaction-id", + "span_id": None, + "trace_id": "trace-id", + "request_id": "", + "api_key_last_four_digits": "-KEY", + "duration": None, # Response time varies each test run + "request.model": "anthropic.claude-instant-v1", + "response.model": "anthropic.claude-instant-v1", + "request.temperature": 0.7, + "request.max_tokens": 100, + "vendor": "bedrock", + "ingest_source": "Python", + "response.number_of_messages": 1, + "error": True, + }, + ), + ( + {"type": "LlmChatCompletionMessage"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (external_botocore)", + "conversation_id": "my-awesome-id", + "request_id": "", + "span_id": None, + "trace_id": "trace-id", + "transaction_id": "transaction-id", + "content": "Human: Invalid Token Assistant:", + "role": "user", + "completion_id": None, + "sequence": 0, + "response.model": "anthropic.claude-instant-v1", + "vendor": "bedrock", + "ingest_source": "Python", + }, + ), + ], + "cohere.command-text-v14": [ + ( + {"type": "LlmChatCompletionSummary"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (external_botocore)", + "conversation_id": "my-awesome-id", + "transaction_id": "transaction-id", + "span_id": None, + "trace_id": "trace-id", + "request_id": "", + "api_key_last_four_digits": 
"-KEY", + "duration": None, # Response time varies each test run + "request.model": "cohere.command-text-v14", + "response.model": "cohere.command-text-v14", + "request.temperature": 0.7, + "request.max_tokens": 100, + "vendor": "bedrock", + "ingest_source": "Python", + "response.number_of_messages": 1, + "error": True, + }, + ), + ( + {"type": "LlmChatCompletionMessage"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (external_botocore)", + "conversation_id": "my-awesome-id", + "request_id": "", + "span_id": None, + "trace_id": "trace-id", + "transaction_id": "transaction-id", + "content": "Invalid Token", + "role": "user", + "completion_id": None, + "sequence": 0, + "response.model": "cohere.command-text-v14", + "vendor": "bedrock", + "ingest_source": "Python", }, ), ], @@ -263,53 +484,21 @@ chat_completion_expected_client_errors = { "amazon.titan-text-express-v1": { - "conversation_id": "my-awesome-id", - "request_id": "15b39c8b-8e85-42c9-9623-06720301bda3", - "api_key_last_four_digits": "-KEY", - "request.model": "amazon.titan-text-express-v1", - "request.temperature": 0.7, - "request.max_tokens": 100, - "vendor": "Bedrock", - "ingest_source": "Python", "http.statusCode": 403, "error.message": "The security token included in the request is invalid.", "error.code": "UnrecognizedClientException", }, "ai21.j2-mid-v1": { - "conversation_id": "my-awesome-id", - "request_id": "9021791d-3797-493d-9277-e33aa6f6d544", - "api_key_last_four_digits": "-KEY", - "request.model": "ai21.j2-mid-v1", - "request.temperature": 0.7, - "request.max_tokens": 100, - "vendor": "Bedrock", - "ingest_source": "Python", "http.statusCode": 403, "error.message": "The security token included in the request is invalid.", "error.code": "UnrecognizedClientException", }, "anthropic.claude-instant-v1": { - "conversation_id": "my-awesome-id", - "request_id": "37396f55-b721-4bae-9461-4c369f5a080d", - "api_key_last_four_digits": "-KEY", - "request.model": 
"anthropic.claude-instant-v1", - "request.temperature": 0.7, - "request.max_tokens": 100, - "vendor": "Bedrock", - "ingest_source": "Python", "http.statusCode": 403, "error.message": "The security token included in the request is invalid.", "error.code": "UnrecognizedClientException", }, "cohere.command-text-v14": { - "conversation_id": "my-awesome-id", - "request_id": "22476490-a0d6-42db-b5ea-32d0b8a7f751", - "api_key_last_four_digits": "-KEY", - "request.model": "cohere.command-text-v14", - "request.temperature": 0.7, - "request.max_tokens": 100, - "vendor": "Bedrock", - "ingest_source": "Python", "http.statusCode": 403, "error.message": "The security token included in the request is invalid.", "error.code": "UnrecognizedClientException", diff --git a/tests/external_botocore/_test_bedrock_embeddings.py b/tests/external_botocore/_test_bedrock_embeddings.py index c47d6692a5..ec677b426c 100644 --- a/tests/external_botocore/_test_bedrock_embeddings.py +++ b/tests/external_botocore/_test_bedrock_embeddings.py @@ -50,23 +50,58 @@ ], } +embedding_expected_error_events = { + "amazon.titan-embed-text-v1": [ + ( + {"type": "LlmEmbedding"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (external_botocore)", + "transaction_id": "transaction-id", + "span_id": None, + "trace_id": "trace-id", + "input": "Invalid Token", + "api_key_last_four_digits": "-KEY", + "duration": None, # Response time varies each test run + "request.model": "amazon.titan-embed-text-v1", + "response.model": "amazon.titan-embed-text-v1", + "request_id": "", + "vendor": "bedrock", + "ingest_source": "Python", + "error": True + }, + ), + ], + "amazon.titan-embed-g1-text-02": [ + ( + {"type": "LlmEmbedding"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (external_botocore)", + "transaction_id": "transaction-id", + "span_id": None, + "trace_id": "trace-id", + "input": "Invalid Token", + "api_key_last_four_digits": "-KEY", + "duration": 
None, # Response time varies each test run + "request.model": "amazon.titan-embed-g1-text-02", + "response.model": "amazon.titan-embed-g1-text-02", + "request_id": "", + "vendor": "bedrock", + "ingest_source": "Python", + "error": True + }, + ), + ], +} + embedding_expected_client_errors = { "amazon.titan-embed-text-v1": { - "request_id": "aece6ad7-e2ff-443b-a953-ba7d385fd0cc", - "api_key_last_four_digits": "-KEY", - "request.model": "amazon.titan-embed-text-v1", - "vendor": "Bedrock", - "ingest_source": "Python", "http.statusCode": 403, "error.message": "The security token included in the request is invalid.", "error.code": "UnrecognizedClientException", }, "amazon.titan-embed-g1-text-02": { - "request_id": "73328313-506e-4da8-af0f-51017fa6ca3f", - "api_key_last_four_digits": "-KEY", - "request.model": "amazon.titan-embed-g1-text-02", - "vendor": "Bedrock", - "ingest_source": "Python", "http.statusCode": 403, "error.message": "The security token included in the request is invalid.", "error.code": "UnrecognizedClientException", diff --git a/tests/external_botocore/test_bedrock_chat_completion.py b/tests/external_botocore/test_bedrock_chat_completion.py index e8cb2d985e..604771c824 100644 --- a/tests/external_botocore/test_bedrock_chat_completion.py +++ b/tests/external_botocore/test_bedrock_chat_completion.py @@ -21,7 +21,8 @@ from _test_bedrock_chat_completion import ( chat_completion_expected_client_errors, chat_completion_expected_events, + chat_completion_invalid_access_key_error_events, chat_completion_payload_templates, ) from conftest import BOTOCORE_VERSION from testing_support.fixtures import ( @@ -87,6 +89,11 @@ def expected_events(model_id): return chat_completion_expected_events[model_id] +@pytest.fixture(scope="module") +def expected_invalid_access_key_error_events(model_id): + return chat_completion_invalid_access_key_error_events[model_id] + + @pytest.fixture(scope="module") def
expected_events_no_convo_id(model_id): events = copy.deepcopy(chat_completion_expected_events[model_id]) @@ -180,51 +187,79 @@ def test_bedrock_chat_completion_disabled_settings(set_trace_info, exercise_mode _client_error_name = callable_name(_client_error) -@validate_error_trace_attributes( - "botocore.errorfactory:ValidationException", - exact_attrs={ - "agent": {}, - "intrinsic": {}, - "user": { +chat_completion_invalid_model_error_events = [ + ( + {"type": "LlmChatCompletionSummary"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (external_botocore)", + "transaction_id": "transaction-id", "conversation_id": "my-awesome-id", - "request_id": "f4908827-3db9-4742-9103-2bbc34578b03", + "span_id": None, + "trace_id": "trace-id", "api_key_last_four_digits": "CRET", + "duration": None, # Response time varies each test run "request.model": "does-not-exist", - "vendor": "Bedrock", + "response.model": "does-not-exist", + "request_id": "", + "vendor": "bedrock", "ingest_source": "Python", - "http.statusCode": 400, - "error.message": "The provided model identifier is invalid.", - "error.code": "ValidationException", + "error": True, }, - }, -) -@validate_transaction_metrics( - name="test_bedrock_chat_completion:test_bedrock_chat_completion_error_invalid_model", - scoped_metrics=[("Llm/completion/Bedrock/invoke_model", 1)], - rollup_metrics=[("Llm/completion/Bedrock/invoke_model", 1)], - custom_metrics=[ - ("Python/ML/Bedrock/%s" % BOTOCORE_VERSION, 1), - ], - background_task=True, -) -@background_task() + ), +] + + +@reset_core_stats_engine() def test_bedrock_chat_completion_error_invalid_model(bedrock_server, set_trace_info): - set_trace_info() - add_custom_attribute("conversation_id", "my-awesome-id") - with pytest.raises(_client_error): - bedrock_server.invoke_model( - body=b"{}", - modelId="does-not-exist", - accept="application/json", - contentType="application/json", - ) + 
@validate_custom_events(chat_completion_invalid_model_error_events) + @validate_error_trace_attributes( + "botocore.errorfactory:ValidationException", + exact_attrs={ + "agent": {}, + "intrinsic": {}, + "user": { + "http.statusCode": 400, + "error.message": "The provided model identifier is invalid.", + "error.code": "ValidationException", + }, + }, + ) + @validate_transaction_metrics( + name="test_bedrock_chat_completion_error_invalid_model", + scoped_metrics=[("Llm/completion/Bedrock/invoke_model", 1)], + rollup_metrics=[("Llm/completion/Bedrock/invoke_model", 1)], + custom_metrics=[ + ("Python/ML/Bedrock/%s" % BOTOCORE_VERSION, 1), + ], + background_task=True, + ) + @background_task(name="test_bedrock_chat_completion_error_invalid_model") + def _test(): + set_trace_info() + add_custom_attribute("conversation_id", "my-awesome-id") + with pytest.raises(_client_error): + bedrock_server.invoke_model( + body=b"{}", + modelId="does-not-exist", + accept="application/json", + contentType="application/json", + ) + + _test() @dt_enabled @reset_core_stats_engine() def test_bedrock_chat_completion_error_incorrect_access_key( - monkeypatch, bedrock_server, exercise_model, set_trace_info, expected_client_error + monkeypatch, + bedrock_server, + exercise_model, + set_trace_info, + expected_client_error, + expected_invalid_access_key_error_events, ): + @validate_custom_events(expected_invalid_access_key_error_events) @validate_error_trace_attributes( _client_error_name, exact_attrs={ diff --git a/tests/external_botocore/test_bedrock_embeddings.py b/tests/external_botocore/test_bedrock_embeddings.py index d2353d94eb..7a5740e465 100644 --- a/tests/external_botocore/test_bedrock_embeddings.py +++ b/tests/external_botocore/test_bedrock_embeddings.py
@@ -20,6 +20,7 @@ from _test_bedrock_embeddings import ( embedding_expected_client_errors, embedding_expected_events, + embedding_expected_error_events, embedding_payload_templates, ) from conftest import BOTOCORE_VERSION @@ -85,6 +86,11 @@ def expected_events(model_id): return embedding_expected_events[model_id] +@pytest.fixture(scope="module") +def expected_error_events(model_id): + return embedding_expected_error_events[model_id] + + @pytest.fixture(scope="module") def expected_client_error(model_id): return embedding_expected_client_errors[model_id] @@ -140,8 +146,9 @@ def test_bedrock_embedding_disabled_settings(set_trace_info, exercise_model): @dt_enabled @reset_core_stats_engine() def test_bedrock_embedding_error_incorrect_access_key( - monkeypatch, bedrock_server, exercise_model, set_trace_info, expected_client_error + monkeypatch, bedrock_server, exercise_model, set_trace_info, expected_error_events, expected_client_error ): + @validate_custom_events(expected_error_events) @validate_error_trace_attributes( _client_error_name, exact_attrs={ From ff8d373ba57d5d2ba073bbcd9233c76d57ebdb11 Mon Sep 17 00:00:00 2001 From: Hannah Stepanek Date: Fri, 8 Dec 2023 17:43:50 -0800 Subject: [PATCH 012/199] Add openai v1 test infrastructure (#1000) * Add chat completion & header instrumentation Co-authored-by: Uma Annamalai * Add support for v1 mock server Co-authored-by: Uma Annamalai * Add openai1.0 tests Co-authored-by: Uma Annamalai * Trigger tests --------- Co-authored-by: Uma Annamalai --- newrelic/config.py | 6 + newrelic/hooks/mlmodel_openai.py | 33 ++- .../_mock_external_openai_server.py | 180 ++++++++++--- tests/mlmodel_openai/conftest.py | 238 +++++++++++++----- .../mlmodel_openai/test_chat_completion_v1.py | 36 +++ tests/mlmodel_openai/test_embeddings_v1.py | 26 ++ tox.ini | 6 +- 7 files changed, 420 insertions(+), 105 deletions(-) create mode 100644 tests/mlmodel_openai/test_chat_completion_v1.py create mode 100644 tests/mlmodel_openai/test_embeddings_v1.py 
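The hooks added in the commit below capture OpenAI response headers by wrapping the client's `_process_response` method and stashing the raw HTTP headers on the returned object under `_nr_response_headers`. A minimal standalone sketch of that attach-attribute wrapper pattern (`FakeClient`, `FakeResponse`, and `ProcessedResult` are illustrative stand-ins, not the real openai or agent classes):

```python
import functools


class ProcessedResult(dict):
    """Stand-in for a deserialized API result; a dict subclass so the
    wrapper can attach private attributes to it."""


class FakeResponse:
    """Stand-in for a raw HTTP response carrying a headers mapping."""

    def __init__(self, headers):
        self.headers = headers


class FakeClient:
    """Stand-in for a client whose _process_response builds result objects."""

    def _process_response(self, response):
        # Pretend to deserialize the raw response body into a result object.
        return ProcessedResult(model="test-model")


def capture_headers(wrapped):
    """Wrap a response-processing method so the raw HTTP headers end up
    stashed on its return value for later instrumentation to read."""

    @functools.wraps(wrapped)
    def wrapper(self, response):
        result = wrapped(self, response)
        result._nr_response_headers = response.headers
        return result

    return wrapper


# Patch the method in place, mirroring what an instrumentation hook does.
FakeClient._process_response = capture_headers(FakeClient._process_response)

client = FakeClient()
result = client._process_response(FakeResponse({"x-request-id": "abc123"}))
print(result._nr_response_headers["x-request-id"])  # abc123
```

Storing the headers on the result object rather than in a global lets later hooks (e.g. the chat-completion wrappers) read `x-request-id` without re-entering the transport layer.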
diff --git a/newrelic/config.py b/newrelic/config.py index 23e839a1e7..f9f2fedcb7 100644 --- a/newrelic/config.py +++ b/newrelic/config.py @@ -2053,6 +2053,12 @@ def _process_module_builtin_defaults(): "newrelic.hooks.mlmodel_openai", "instrument_openai_util", ) + _process_module_definition( + "openai._base_client", + "newrelic.hooks.mlmodel_openai", + "instrument_openai_base_client", + ) + _process_module_definition( "asyncio.base_events", "newrelic.hooks.coroutines_asyncio", diff --git a/newrelic/hooks/mlmodel_openai.py b/newrelic/hooks/mlmodel_openai.py index 6c3941fffa..5b3857d0ee 100644 --- a/newrelic/hooks/mlmodel_openai.py +++ b/newrelic/hooks/mlmodel_openai.py @@ -236,7 +236,7 @@ def wrap_chat_completion_create(wrapped, instance, args, kwargs): response_headers = getattr(response, "_nr_response_headers", None) response_model = response.get("model", "") response_id = response.get("id") - request_id = response_headers.get("x-request-id", "") + request_id = response_headers.get("x-request-id", "") if response_headers else "" response_usage = response.get("usage", {}) @@ -380,7 +380,7 @@ def create_chat_completion_message_event( if output_message_list: # Loop through all output messages received from the LLM response and emit a custom event for each one for index, message in enumerate(output_message_list): - message_content = message.get("content", "") + message_content = getattr(message, "content", "") # Add offset of input_message_length so we don't receive any duplicate index values that match the input message IDs index += len(input_message_list) @@ -403,7 +403,7 @@ def create_chat_completion_message_event( "trace_id": trace_id, "transaction_id": transaction.guid, "content": message_content, - "role": message.get("role", ""), + "role": getattr(message, "role", ""), "completion_id": chat_completion_id, "sequence": index, "response.model": response_model if response_model else "", @@ -627,7 +627,7 @@ async def wrap_chat_completion_acreate(wrapped, instance, 
args, kwargs): response_headers = getattr(response, "_nr_response_headers", None) response_model = response.get("model", "") response_id = response.get("id") - request_id = response_headers.get("x-request-id", "") + request_id = response_headers.get("x-request-id", "") if response_headers else "" response_usage = response.get("usage", {}) @@ -715,6 +715,26 @@ def wrap_convert_to_openai_object(wrapped, instance, args, kwargs): return returned_response +def bind_base_client_process_response( + cast_to, + options, + response, + stream, + stream_cls, +): + return response + + +def wrap_base_client_process_response(wrapped, instance, args, kwargs): + response = bind_base_client_process_response(*args, **kwargs) + nr_response_headers = getattr(response, "headers") + + return_val = wrapped(*args, **kwargs) + + return_val._nr_response_headers = nr_response_headers + return return_val + + def instrument_openai_util(module): wrap_function_wrapper(module, "convert_to_openai_object", wrap_convert_to_openai_object) @@ -731,3 +751,8 @@ def instrument_openai_api_resources_chat_completion(module): wrap_function_wrapper(module, "ChatCompletion.create", wrap_chat_completion_create) if hasattr(module.ChatCompletion, "acreate"): wrap_function_wrapper(module, "ChatCompletion.acreate", wrap_chat_completion_acreate) + + +def instrument_openai_base_client(module): + if hasattr(module.BaseClient, "_process_response"): + wrap_function_wrapper(module, "BaseClient._process_response", wrap_base_client_process_response) diff --git a/tests/mlmodel_openai/_mock_external_openai_server.py b/tests/mlmodel_openai/_mock_external_openai_server.py index 44cfb5d0de..6cac9e2a68 100644 --- a/tests/mlmodel_openai/_mock_external_openai_server.py +++ b/tests/mlmodel_openai/_mock_external_openai_server.py @@ -14,8 +14,11 @@ import json +import pytest from testing_support.mock_external_http_server import MockExternalHTTPServer +from newrelic.common.package_version_utils import get_package_version_tuple + # This 
defines an external server test apps can make requests to instead of # the real OpenAI backend. This provides 3 features: # @@ -27,6 +30,74 @@ # created by an external call. # 3) This app runs on a separate thread meaning it won't block the test app. +RESPONSES_V1 = { + "You are a scientist.": [ + { + "content-type": "application/json", + "openai-model": "gpt-3.5-turbo-0613", + "openai-organization": "foobar-jtbczk", + "openai-processing-ms": "6326", + "openai-version": "2020-10-01", + "x-ratelimit-limit-requests": "200", + "x-ratelimit-limit-tokens": "40000", + "x-ratelimit-limit-tokens_usage_based": "40000", + "x-ratelimit-remaining-requests": "198", + "x-ratelimit-remaining-tokens": "39880", + "x-ratelimit-remaining-tokens_usage_based": "39880", + "x-ratelimit-reset-requests": "11m32.334s", + "x-ratelimit-reset-tokens": "180ms", + "x-ratelimit-reset-tokens_usage_based": "180ms", + "x-request-id": "f8d0f53b6881c5c0a3698e55f8f410ac", + }, + 200, + { + "id": "chatcmpl-8TJ9dS50zgQM7XicE8PLnCyEihRug", + "object": "chat.completion", + "created": 1701995833, + "model": "gpt-3.5-turbo-0613", + "choices": [ + { + "index": 0, + "message": { + "role": "assistant", + "content": "To convert 212 degrees Fahrenheit to Celsius, you can use the formula:\n\n\u00b0C = (\u00b0F - 32) x 5/9\n\nSubstituting the value, we get:\n\n\u00b0C = (212 - 32) x 5/9\n = 180 x 5/9\n = 100\n\nTherefore, 212 degrees Fahrenheit is equal to 100 degrees Celsius.", + }, + "finish_reason": "stop", + } + ], + "usage": {"prompt_tokens": 26, "completion_tokens": 82, "total_tokens": 108}, + "system_fingerprint": None, + }, + ], + "This is an embedding test.": [ + { + "content-type": "application/json", + "openai-organization": "foobar-jtbczk", + "openai-processing-ms": "21", + "openai-version": "2020-10-01", + "x-ratelimit-limit-requests": "200", + "x-ratelimit-limit-tokens": "150000", + "x-ratelimit-remaining-requests": "197", + "x-ratelimit-remaining-tokens": "149993", + "x-ratelimit-reset-requests": 
"19m5.228s", + "x-ratelimit-reset-tokens": "2ms", + "x-request-id": "fef7adee5adcfb03c083961bdce4f6a4", + }, + 200, + { + "object": "list", + "data": [ + { + "object": "embedding", + "index": 0, + "embedding": "SLewvFF6iztXKj07UOCQO41IorspWOk79KHuu12FrbwjqLe8FCTnvBKqj7sz6bM8qqUEvFSfITpPrJu7uOSbPM8agzyYYqM7YJl/PBF2mryNN967uRiRO9lGcbszcuq7RZIavAnnNLwWA5s8mnb1vG+UGTyqpYS846PGO2M1X7wIxAO8HfgFvc8s8LuQXPQ5qgsKPOinEL15ndY8/MrOu1LRMTxCbQS7PEYJOyMx7rwDJj+79dVjO5P4UzmoPZq8jUgivL36UjzA/Lc8Jt6Ru4bKAL1jRiM70i5VO4neUjwneAy7mlNEPBVpoDuayo28TO2KvAmBrzzwvyy8B3/KO0ZgCry3sKa6QTmPO0a1Szz46Iw87AAcPF0O5DyJVZw8Ac+Yu1y3Pbqzesw8DUDAuq8hQbyALLy7TngmPL6lETxXxLc6TzXSvKJrYLy309c8OHa0OU3NZ7vru2K8mIXUPCxrErxLU5C5s/EVPI+wjLp7BcE74TvcO+2aFrx4A9w80j+Zu/aAojwmzU08k/hTvBpL4rvHFFQ76YftutrxL7wyxgK9BsIevLkYkTq4B028OZnlPPkcgjxhzfS79oCiuB34BbwITTq97nrzOugwRzwGS1U7CqTgvFxROLx4aWG7E/DxPA3J9jwd+AU8dVWPvGlc2jzwWae57nrzu569E72GU7e8Vn9+vFLA7TtVbZE8eOCqPG+3Sjxr5/W8s+DRPE+sm7wFKKQ8A8A5vUSBVryeIxk8hsqAPAeQjryeIxm8gU/tuxVpoDxVXM250GDlOlEDwjs0t6O8Tt6rOVrGHLvmyFy6dhI7PLPxlbv3YP88B/YTPEZgCrxqKsq8Xh+ou96wQLp5rpo8LSg+vL63/rsFjqk8E/DxPEi3MDzTcw66PjcqPNgSfLwqnaK85QuxPI7iHL2+pRE8Z+ICOxzEELvph+07jHqyu2ltnrwNQMC82BL8vAOdiDwSqo88CLM/PCKFBrzmP6a85Nc7PBaM0bvh1VY7NB2pvMkF9Tx3New87mgGPAoKZjo+nS+/Rk/GucqwMz3fwYS8yrCzPMo56jyDHV08XLe9vB4+aLwXwMY8dVUPvCFATbx2eMC8V7NzvEnrpTsIxIO7yVmNu2lc2ryGQnM8A6/1PH/VFbySO6g80i5VPOY/prv6cyi7W5QMPJVP+jsyLIi84H6wPKM50DrZNIS8UEaWPPrIaTzvrmg8rcoaPRuQm7ysH9y8OxIUO7ss4zq3Od08paG6vAPAuTjYAI88/qmCuuROhbzBMK08R4M7u67+j7uClKa6/KedOsqNArzysM08QJ8UvMD8t7v5P7M799fIvAWx2jxiEi48ja6nPL0LFzxFkpq7LAWNPA1AQLyWlLO6qrfxvOGypTxJUau8aJ8uPceLnTtS0TG9omtgPO7xPDvzbfm7FfJWu2CqwzwAASk96FN4PLPgUbwRdhq8Vn9+PLk7wjs8NUW84yx9vHJCZjzysM079hodO/NbDL2BxrY6CE26OzpEpDv7DaM8y0quO41IIr1+Kte8QdMJvKlxDzy9+lI8hfyQPA3J9jzWmKS7z6O5u4a5vLtXKj088XzYO1fEtzwY4/e7Js1NugbCnjymxOu7906SvPSPAb1ieDO8dnjAu/EW0zp/b5C8mGIjvWTPWTwIxIM8YgFqPKvrZrwKpOA7/jK5O2vViDyfaXs8DR2Pu0AFGrvTc446IIOhvDreHrxRnTw8ROdbu55Gyrsht5Y8tVmAvHK5rzzZvTo8bx1QPMglmLvigBU8oIuDvAFYz7pbl
Iw8OZnlOsTvPbxhzfS8BxnFOpkwE72E60w7cNp7utp6ZrtvHdC4uwmyO5dRX7sAm6M7kqEtvElRK7yWg++7JHanvM6ACDvrZqG8Xh+oupQsyTwkZWO8VzuBu5xVKbzEZoc7wB9pvA796zyZlpi8YbsHvQs+W7u9cZy8gKMFOxYDGzyu7Uu71KeDPJxVqbxwyI68VpDCu9VT67xKqFG7KWmtuvNteTocs0w7aJ8uPMUSbzz6cyg8MiwIPEtlfTo+wOA75tkgu7VZgDw8WPa8mGIjPKq38bsr0Zc7Ot4evNNiyju9C5c7YCENPP6pAj3uV8I7X3bOusfxIjvpZLy655bMvL9ivbxO3iu8NKbfPNe7VTz9ZMk88RZTu5QsybxeQtk7qpTAOzGSjTxSwO27mGIjPO7OC7x7FoW8wJayvI2uJzttxqk84H4wOUtlfbxblAw8uTtCPIO3Vzxkz9k8ENwfvfQYuLvHFNQ8LvatPF65ojzPLHA8+RyCvK3Kmjx27wk8Dcn2PARatDv3tBc8hkLzPEOz5jyQSoe8gU/tPMRmhzzp2wU90shPPBv2oLsNQMA8jTdevIftMTt/Xsw7MMQdPICjBT012tS7SLewvJBtuDuevZM8LyojPa6HxjtOAd07v9mGusZXqDoPqKo8qdeUvETnW7y5occ5pOSOvPPkwjsDN4O8Mk85vKnXlDtp06O7kZDpO6GuNDtRFAY9lAkYPGHNdDx2Afc7RRtROy5/5LyUoxI9mu0+u/dOEryrYrC867vivJp29TtVbZG8SVGrO0im7LnhsqU80frfPL/IwryBT+07/+/kPLZ8sTwoNbg7ZkiIOxadlbxlnUm68RbTuxkX7Tu/cwG7aqGTPO8CAbzTYsq6AIpfvA50tbzllOc7s3rMO0SBVjzXzJm8eZ3Wu4vgtzwPDrA8W6b5uwJpEzwLtaQ81pgkPJuqarxmro288369u48WkjwREBU9JP/dPJ69kzvw4t27h3bouxhrBbwrNx29F9EKPFmSJ7v8px08Tt6rvEJthLxon648UYz4u61TUTz4lPQ7ERAVuhwqFrzfSjs8RRtRO6lxD7zHelm87lfCu10O5LrXMh886YftvL9iPTxCf/E6MZKNOmAhDb2diZ47eRSgPBfRCrznlsw5MiwIvHW7FD3tI807uG3SPE7eqzx1VY864TtcO3zTMDw7EhS8c+0kPLr47TvUDQm8domEvEi3MLruaAa7tUi8u4FgsTwbkBu6pQfAvEJthLwDnQg8S1OQO55GSrxZLCK8nkZKvFXTFr01dM+8W6Z5vO+u6Luh0eW8rofGvFsdw7x7KHK8sN5svCFAzbo/0SS8f9UVu7Qli7wr0Re95E4FvSg1ODok/907AAGpPHQhGrwtS++71pgkvCtazjsSzcC7exYFPLVZgLzZmom7W6Z5PHr0fLtn9O86oUivukvcRrzjPcE8a8REPAei+zoBNZ685aUrPNBg5bqeIxk8FJuwPPdOkrtUOZy8GRftO4KD4rz/72Q7ERCVu8WJODy5O8I5L7NZuxJECjxFkpq8Uq4AOy2fh7wY9Du8GRdtu48o/7mHdug803MOvCUQIrw2hZM8v+tzvE54pruyI6a6exYFvDXrGDwNQEA8zyxwO7c53TwUJGe8Wk9Tu6ouu7yqCwo8vi7IvNe71TxB04m8domEvKTkDrzsidK8+nOovLfT1zr11eM7SVErO3EOcbzqMqw74Tvcut4WRrz5pbi8oznQvMi/Er0aS+I87lfCvK+qdztd6zI83eJQPFy3vbyACQu9/8wzO/k/s7weG7e8906SPA3J9jw8NUU8TUQxPfEWU7wjH4E8J3gMPC72LTp6SJU8exaFOXBiibyf4MS6EXYaO3DIjjy61by7ACRaO5NvnTvMGB48Dw6wPFEUBr30j4E7niMZvIZC87s7EpS8OZnlPJZxgrxug9U7/DDUvNrxL7yV14e3E2c7PBdaQTwT8HE8oIuDPGIB6rvMB
9o6cR+1OwbCHrylfgm8z6M5vIiqXbxFG1G8a9WIPItp7rpGT8Y838GEvAoK5jyAG3g7xRJvPPxBGLzJWQ28XYWtO85vRLp0IZq8cR81vc7mDb28PSe89LKyuig1uDyxEuK8GlwmPIbKgLwHGcW7/qkCvC8ZXzzSyE89F8BGOxPw8Tx+Ktc8BkvVurXiNryRkOk8jyj/OcKH0zp69Pw8apDPPFuUjLwPDrC8xuBeuD43KrxuYKQ7qXGPvF0OZDx1VQ88VVzNvD9rn7ushWE7EZlLvSL9+DrHi528dzXsu3k30bzeFka7hrm8vD3gAz1/Xsy80D20PNPZE7sorAG86WS8u2Y3xDtvHVC7PKwOO5DkAT3KOeo8c+0kvI+fyLuY61k8SKbsO4TrzLrrZqE87O9XvMkF9Tynb6q847SKvBjjdzyhSK88zTtPPNNzjjsvGV87UQPCvMD8t7stn4e7GRftPBQkZ7x4eiW7sqzcu3ufO7yAG3g8OHa0u0T4n7wcxJC7r6r3vAbCnrth3rg7BxnFumqQzzyXyCi8V8Q3vEPEqjyIu6E8Ac+YvGR6GLulkHY8um83PMqNgrv5pTi8N7kIPOhTeLy6TIY8B5COvDLGArvEzAy9IbcWvIUfQjxQ4BC7B/aTvCfwfrz15ie8ucR4PD1pursLtSS8AgMOOzIsiLv0srI7Q01hPCvRF7vySsg6O5tKunh6JTvCZCI7xuDevLc53btvLhQ8/pi+PJU9Dbugi4O8Qn/xvLpMhrth3ji8n/GIPKouu7tBS3y853MbPGAQyTt27wk7iokRO8d62bzZRnG7sN5svAG+1Lqvqve8JGXjur0Ll7tCf/E75/xRPIWFx7wgDNi8ucT4OZNvHb2nktu8qrfxuyR2J7zWh2A6juKcPDhlcLx/1RU9IAxYPGJ4szylB8C8qfrFO276HjuWcQK9QdOJvCUQIjzjo8a8SeslvBrCKztCf/E66MrBOx1eCz2Xt+Q66YdtvKg9mrrLSq47fFznO1uUjDsoNTg8QyqwuzH4Ejz/Zi67A8A5uKg9GrtFkhq862ahOzSmXzkMDEs8q+vmvNVkLzwc1n28mu0+vCbekTyCg+K7ekgVvO8CAT2yRtc8apBPu1b2R7zUp4M8VW2RvPc9zrx69Hw753ObvCcSB71sG+u8OwHQuv67b7zLSi65HrWxO0ZPRrxmwPq7t7CmPGxvAzygnfC8oIsDvKY7tbwZF+07p2+qvOnbhbv0oW47/2auuThlcDwIxIM8n/EIO6ijH7vHetk7uRiRPGUDT7pgh5I85shcPpGQabykShS7FWmgPPjojDvJ8wc8mlPEOY2uJzt7FoW7HNb9O7rVvDzKjQI80NcuuqvINbvNTBO8TgFdvEJ/cbzEZoe8SVGrvMvkqLyHdui7P2ufvBSbMDw0t6O82GaUPOLmGrxSNze8KVjpuwizPzwqjN48Xh8ovE4B3TtiAeo8azsOO8eLnbyO4py7x/GiPIvgNzzvi7c8BFq0O/dOEj1fU5282ZoJPCL9+LqyIyY8IoUGPNI/mbwKpGC7EkQKuzrN2jwVzyU7QpA1vLIjpjwi64s8HYE8u6eSW7yryLU8yK5OOzysjjwi6wu8GsIrOu7xPDwCaRO8dzVsPP/vZLwT3oQ8cQ7xvOJv0TtWBww8hlM3PBPeBDxT9OK71pgkPPSysrugiwO90GDlvHOHHz3xfNg8904SPVpglzzmP6a7Cgrmu9/BBLyH7bG85QsxvVSfIb2Xt2Q8paG6vOqYsTos9Mi8nqxPu8wHWjuYhdS7GAWAvCIOvTp/bxA8j7CMPG1P4Dxd67I7xxRUvOM9wbxMhwU9Kp0iPfF82LvQYOU6XkJZPBxNx7y0nX28B5COO8FT3rp4eiW8R/oEvSfw/jtC9rq8n/GIux3nQTw8WPY8LBf6uzSmXzzSPxm88rDNvDysDjwyPnW7tdFyPBLNwDo8WHa8bPi5vOO0CrylGAQ8YgFqvEFLfDy7L
OO7TIeFPAHPmDv3YP+6/+9kPBKqjzt5rpo8VJ+hvE7eKzyc3t88P2sfvLQUR7wJ1vC6exaFvD6dr7zNO888i+A3ulwuhzuF/JC8gKMFveoyLLxqBxk7YgFquws+2zwOUYS8agcZvGJ4M71AjtC747QKvAizP73UH3a7LvatPJBtuLzEzIy8bG8DvJEHM75E59s7zbIYPObZIL2uZJW7WRveugblTzy6TIa802JKvD9rH7xlA088QAWavIFP7bwL2FW8vqWRu0ZgijyRkGm7ZGnUvIeHLD1c2m48THbBPPkcAr1NzWc8+JT0uulkvLvXMp+7lU96u7kYET1xhTo8e3wKvItGPTxb+hG87mgGPWqhk7uhrrQ73rBAPCbNTT13rDW8K8DTus8s8DsNt4k8gpQmPLES4ryyvSA8lcbDO60woDyLVwE9BFq0u+cNFj3C7Vi8UXoLPDYOyryQ0z083+S1Ox34hTzEzIw7pX4Ju6ouuzxIpmw8w5iXuylYaTy5sgu9Js3NOo+fyLyjFp+8MMSdvOROBb2n+OA7b7fKOeIJzDoNpkW8WsYct7SdfTxXxLc7TO2KO3YB9zynktu7OkSkPKnXFLvtRv47AJujuzGSDT0twjg8AgOOO4d26DvpZDy8lAkYPI5r0zcGS9W8OGXwu9xIVjyH7TG9IUDNuiqMXrwb9qA79I+BPL1xHLuVPY07MOfOO0ztCruvMoW8BuXPu4AbeLyIRNg8uG3SPO5XQjuFH0K8zm9EPEAoSz0tKL652ZqJOgABqbwsjsM8mlPEPLewpjsVWNw8OGXwOlYHjLzfwQQ81iFbOyJ0Qj3d85S7cQ7xvIqswjxKhSC7906SvAFYz72xiau8LAWNPB1eCz09jGu72ZoJPfDiXTwPDrA8CYGvvNH6XzxTa6y8+RwCvY8of7xxDnG8Ef/QvJ9p+zqh0eU8a16/OzBN1LyDLiE9PFh2u+0jTbxLUxA9ZZ3JvItXgbqL4Dc8BuXPvKnXFDzmPyY8k/hTOlum+bqAksG8OZnluPmluLxRnTy6/KcdvKAUOrzRcSm8fqEgPcTeebzeOXc8KCR0OnN2W7xRA0K8Wsacu+M9wToyLIi8mTATu21P4LuadvW8Dtq6vPmlODsjqLe88ieXPJEHszySoa08U/RiPNQNCbwb9qC8bG+DOXW7FL0OdLW7Tc3nvG8dULsAJNo7fNMwO7sJMr2O4hy85ZTnuwAkWjw+Nyq8rcoaO+8lsrvx86E8U/TivGUUkzp6SJW8lT0NvWz4uTzeFka6qguKvIKD4rt/1ZU8LBf6vD6dr7es/Ko7qWBLvIlVHDxwUUU6Jt4RvRJEijnRcSk88235PGvVCL3zbfm8DaZFO+7xvLs3qES8oznQO9XKNDxZLKK8IIMhvComWb0CAw48fDk2O+nbBb29C5e8ogVbu1EUBryYhdS7OTPgOul1AD25sgs7i1cBPBYmzLtSroA8hfyQvP3bErz9h/o82ZoJO7/ZhjxtT+A8UZ28uzaFk7wJ1nA6dd7FPGg5Kbwb9iC8psRrvBXyVjzGRuS8uAfNu0+smzvFAAK96FN4vC2fhzy65oC7tgXou/9mLjxMELw8GSgxPRBlVjxDxCq80j8ZveinkDxHgzu70j8ZvPGNnDyPn0i8Vn9+urXR8ju10fI7sRJiPDBemLt8OTa8tJ39O4ne0rsaXKa7t0ohPHQhGrdYXjI824sqvDw1RT2/2YY8E/BxPIUOfjv9dQ08PM8/PMwYHrwwXpi7nqxPPM8aA7w+wOC7ROdbO79iPTxVbRE8U45dPOOjRjxwYok8ME1Uu1SfIbyifKQ8UXqLPI85wzsITTq8R+lAPMRVQzzcv58892B/Oqg9mjw3MXu7P9EkvM6AiLyx7zA8eHolPLYWLLugFLq8AJsjvEOzZjk6RKQ8uRgRPXVVjzw0HSk9PWk6PLss47spzzK93rBAvJpTxDun+OC7OTPgvEa1yzvAH+k5fZDcOid4jLuN0
di8N7kIPPe0F7wVaSC8zxoDvJVgvrvUpwO9dd7FPKUHQLxn4oI7Ng7KPIydYzzZRvE8LTkCu3bvCTy10fK7QAWaPGHeOLu6+O27omvgO8Rmh7xrXj87AzeDvORg8jnGRuS8UEYWPLPg0TvYZpQ9FJuwPLC7O7xug1U8bvoevAnW8DvxFtM8kEoHPDxYdrzcWZq8n3q/O94nCjvZI0C82yUlvayWpbyHh6y7ME1UO9b+KTzbFGG89oCiPFpgFzzhTKA84gnMPKgsVjyia+C7XNpuPHxc5zyDLqG8ukyGvKqUQLwG5U88wB/pO+B+ML2O4py8MOdOPHt8irsDnYg6rv6PumJ4szzuV0I80qWePKTkDj14A9y8fqEgu9DXLjykbUU7yEhJvLYFaLyfVw68", + } + ], + "model": "text-embedding-ada-002-v2", + "usage": {"prompt_tokens": 6, "total_tokens": 6}, + }, + ], +} RESPONSES = { "Invalid API key.": ( {"Content-Type": "application/json; charset=utf-8", "x-request-id": "4f8f61a7d0401e42a6760ea2ca2049f6"}, @@ -166,60 +237,91 @@ } -def simple_get(self): - content_len = int(self.headers.get("content-length")) - content = json.loads(self.rfile.read(content_len).decode("utf-8")) +@pytest.fixture(scope="session") +def simple_get(openai_version, extract_shortened_prompt): + def _simple_get(self): + content_len = int(self.headers.get("content-length")) + content = json.loads(self.rfile.read(content_len).decode("utf-8")) - prompt = extract_shortened_prompt(content) - if not prompt: - self.send_response(500) - self.end_headers() - self.wfile.write("Could not parse prompt.".encode("utf-8")) - return + prompt = extract_shortened_prompt(content) + if not prompt: + self.send_response(500) + self.end_headers() + self.wfile.write("Could not parse prompt.".encode("utf-8")) + return + + headers, response = ({}, "") + + if openai_version < (1, 0): + mocked_responses = RESPONSES + else: + mocked_responses = RESPONSES_V1 + + for k, v in mocked_responses.items(): + if prompt.startswith(k): + headers, status_code, response = v + break + else: # If no matches found + self.send_response(500) + self.end_headers() + self.wfile.write(("Unknown Prompt:\n%s" % prompt).encode("utf-8")) + return + + # Send response code + self.send_response(status_code) - headers, response = ({}, "") - for k, v in RESPONSES.items(): - if prompt.startswith(k): - 
headers, status_code, response = v - break - else: # If no matches found - self.send_response(500) + # Send headers + for k, v in headers.items(): + self.send_header(k, v) self.end_headers() - self.wfile.write(("Unknown Prompt:\n%s" % prompt).encode("utf-8")) + + # Send response body + self.wfile.write(json.dumps(response).encode("utf-8")) return - # Send response code - self.send_response(status_code) + return _simple_get + + +@pytest.fixture(scope="session") +def MockExternalOpenAIServer(simple_get): + class _MockExternalOpenAIServer(MockExternalHTTPServer): + # To use this class in a test one needs to start and stop this server + # before and after making requests to the test app that makes the external + # calls. + + def __init__(self, handler=simple_get, port=None, *args, **kwargs): + super(_MockExternalOpenAIServer, self).__init__(handler=handler, port=port, *args, **kwargs) + + return _MockExternalOpenAIServer + - # Send headers - for k, v in headers.items(): - self.send_header(k, v) - self.end_headers() +@pytest.fixture(scope="session") +def extract_shortened_prompt(openai_version): + def _extract_shortened_prompt(content): + if openai_version < (1, 0): + prompt = content.get("prompt", None) or content.get("input", None) or content.get("messages")[0]["content"] + else: + prompt = content.get("input", None) or content.get("messages")[0]["content"] + return prompt - # Send response body - self.wfile.write(json.dumps(response).encode("utf-8")) - return + return _extract_shortened_prompt -def extract_shortened_prompt(content): - prompt = ( - content.get("prompt", None) - or content.get("input", None) - or "\n".join(m["content"] for m in content.get("messages")) - ) - return prompt.lstrip().split("\n")[0] +def get_openai_version(): + # Import OpenAI so that get package version can capture the version from the + # system module. OpenAI does not have a package version in v0.
+ import openai  # noqa: F401; pylint: disable=W0611 + return get_package_version_tuple("openai") -class MockExternalOpenAIServer(MockExternalHTTPServer): - # To use this class in a test one needs to start and stop this server - # before and after making requests to the test app that makes the external - # calls. - def __init__(self, handler=simple_get, port=None, *args, **kwargs): - super(MockExternalOpenAIServer, self).__init__(handler=handler, port=port, *args, **kwargs) +@pytest.fixture(scope="session") +def openai_version(): + return get_openai_version() if __name__ == "__main__": with MockExternalOpenAIServer() as server: print("MockExternalOpenAIServer serving on port %s" % str(server.port)) while True: diff --git a/tests/mlmodel_openai/conftest.py b/tests/mlmodel_openai/conftest.py index 15518aa1a7..403a76f46a 100644 --- a/tests/mlmodel_openai/conftest.py +++ b/tests/mlmodel_openai/conftest.py @@ -16,9 +16,12 @@ import os import pytest -from _mock_external_openai_server import ( +from _mock_external_openai_server import ( # noqa: F401; pylint: disable=W0611 MockExternalOpenAIServer, extract_shortened_prompt, + get_openai_version, + openai_version, + simple_get, ) from testing_support.fixture.event_loop import ( # noqa: F401; pylint: disable=W0611 event_loop as loop, @@ -46,8 +49,81 @@ linked_applications=["Python Agent Test (mlmodel_openai)"], ) +if get_openai_version() < (1, 0): + collect_ignore = [ + "test_chat_completion_v1.py", + "test_embeddings_v1.py", + ] +else: + collect_ignore = [ + "test_embeddings.py", + "test_embeddings_error.py", + "test_chat_completion.py", + "test_get_llm_message_ids.py", + "test_chat_completion_error.py", + ] + + OPENAI_AUDIT_LOG_FILE = os.path.join(os.path.realpath(os.path.dirname(__file__)), "openai_audit.log") OPENAI_AUDIT_LOG_CONTENTS = {} +# Intercept outgoing requests and log to file for mocking +RECORDED_HEADERS = set(["x-request-id", "content-type"]) + +
+@pytest.fixture(scope="session") +def openai_clients(openai_version, MockExternalOpenAIServer): # noqa: F811 + """ + This configures the openai client and returns it for openai v1 and only configures + openai for v0 since there is no client. + """ + import openai + + from newrelic.core.config import _environ_as_bool + + if not _environ_as_bool("NEW_RELIC_TESTING_RECORD_OPENAI_RESPONSES", False): + with MockExternalOpenAIServer() as server: + if openai_version < (1, 0): + openai.api_base = "http://localhost:%d" % server.port + openai.api_key = "NOT-A-REAL-SECRET" + yield + else: + openai_sync = openai.OpenAI( + base_url="http://localhost:%d" % server.port, + api_key="NOT-A-REAL-SECRET", + ) + openai_async = openai.AsyncOpenAI( + base_url="http://localhost:%d" % server.port, + api_key="NOT-A-REAL-SECRET", + ) + yield (openai_sync, openai_async) + else: + openai_api_key = os.environ.get("OPENAI_API_KEY") + if not openai_api_key: + raise RuntimeError("OPENAI_API_KEY environment variable required.") + + if openai_version < (1, 0): + openai.api_key = openai_api_key + yield + else: + openai_sync = openai.OpenAI( + api_key=openai_api_key, + ) + openai_async = openai.AsyncOpenAI( + api_key=openai_api_key, + ) + yield (openai_sync, openai_async) + + +@pytest.fixture(scope="session") +def sync_openai_client(openai_clients): + sync_client, _ = openai_clients + return sync_client + + +@pytest.fixture(scope="session") +def async_openai_client(openai_clients): + _, async_client = openai_clients + return async_client @pytest.fixture @@ -62,87 +138,93 @@ def set_info(): @pytest.fixture(autouse=True, scope="session") -def openai_server(): +def openai_server( + openai_version, # noqa: F811 + openai_clients, + wrap_openai_base_client_process_response, + wrap_openai_api_requestor_request, + wrap_openai_api_requestor_interpret_response, +): """ This fixture will either create a mocked backend for testing purposes, or will set up an audit log file to log responses of the real OpenAI 
backend to a file. The behavior can be controlled by setting NEW_RELIC_TESTING_RECORD_OPENAI_RESPONSES=1 as an environment variable to run using the real OpenAI backend. (Default: mocking) """ - import openai - from newrelic.core.config import _environ_as_bool - if not _environ_as_bool("NEW_RELIC_TESTING_RECORD_OPENAI_RESPONSES", False): - # Use mocked OpenAI backend and prerecorded responses - with MockExternalOpenAIServer() as server: - openai.api_base = "http://localhost:%d" % server.port - openai.api_key = "NOT-A-REAL-SECRET" - yield - else: - # Use real OpenAI backend and record responses - openai.api_key = os.environ.get("OPENAI_API_KEY", "") - if not openai.api_key: - raise RuntimeError("OPENAI_API_KEY environment variable required.") - - # Apply function wrappers to record data - wrap_function_wrapper("openai.api_requestor", "APIRequestor.request", wrap_openai_api_requestor_request) - wrap_function_wrapper( - "openai.api_requestor", "APIRequestor._interpret_response", wrap_openai_api_requestor_interpret_response - ) - yield # Run tests - + if _environ_as_bool("NEW_RELIC_TESTING_RECORD_OPENAI_RESPONSES", False): + if openai_version < (1, 0): + # Apply function wrappers to record data + wrap_function_wrapper("openai.api_requestor", "APIRequestor.request", wrap_openai_api_requestor_request) + wrap_function_wrapper( + "openai.api_requestor", "APIRequestor._interpret_response", wrap_openai_api_requestor_interpret_response + ) + yield # Run tests + else: + # Apply function wrappers to record data + wrap_function_wrapper( + "openai._base_client", "BaseClient._process_response", wrap_openai_base_client_process_response + ) + yield # Run tests # Write responses to audit log with open(OPENAI_AUDIT_LOG_FILE, "w") as audit_log_fp: json.dump(OPENAI_AUDIT_LOG_CONTENTS, fp=audit_log_fp, indent=4) + else: + # We are mocking openai responses so we don't need to do anything in this case. 
+ yield -# Intercept outgoing requests and log to file for mocking -RECORDED_HEADERS = set(["x-request-id", "content-type"]) - - -def wrap_openai_api_requestor_interpret_response(wrapped, instance, args, kwargs): - rbody, rcode, rheaders = bind_request_interpret_response_params(*args, **kwargs) - headers = dict( - filter( - lambda k: k[0].lower() in RECORDED_HEADERS - or k[0].lower().startswith("openai") - or k[0].lower().startswith("x-ratelimit"), - rheaders.items(), +@pytest.fixture(scope="session") +def wrap_openai_api_requestor_interpret_response(): + def _wrap_openai_api_requestor_interpret_response(wrapped, instance, args, kwargs): + rbody, rcode, rheaders = bind_request_interpret_response_params(*args, **kwargs) + headers = dict( + filter( + lambda k: k[0].lower() in RECORDED_HEADERS + or k[0].lower().startswith("openai") + or k[0].lower().startswith("x-ratelimit"), + rheaders.items(), + ) ) - ) - if rcode >= 400 or rcode < 200: - rbody = json.loads(rbody) - OPENAI_AUDIT_LOG_CONTENTS["error"] = headers, rcode, rbody # Append response data to audit log - return wrapped(*args, **kwargs) + if rcode >= 400 or rcode < 200: + rbody = json.loads(rbody) + OPENAI_AUDIT_LOG_CONTENTS["error"] = headers, rcode, rbody # Append response data to audit log + return wrapped(*args, **kwargs) + return _wrap_openai_api_requestor_interpret_response -def wrap_openai_api_requestor_request(wrapped, instance, args, kwargs): - params = bind_request_params(*args, **kwargs) - if not params: - return wrapped(*args, **kwargs) - prompt = extract_shortened_prompt(params) +@pytest.fixture(scope="session") +def wrap_openai_api_requestor_request(extract_shortened_prompt): # noqa: F811 + def _wrap_openai_api_requestor_request(wrapped, instance, args, kwargs): + params = bind_request_params(*args, **kwargs) + if not params: + return wrapped(*args, **kwargs) + + prompt = extract_shortened_prompt(params) - # Send request - result = wrapped(*args, **kwargs) + # Send request + result = 
wrapped(*args, **kwargs) - # Clean up data - data = result[0].data - headers = result[0]._headers - headers = dict( - filter( - lambda k: k[0].lower() in RECORDED_HEADERS - or k[0].lower().startswith("openai") - or k[0].lower().startswith("x-ratelimit"), - headers.items(), + # Clean up data + data = result[0].data + headers = result[0]._headers + headers = dict( + filter( + lambda k: k[0].lower() in RECORDED_HEADERS + or k[0].lower().startswith("openai") + or k[0].lower().startswith("x-ratelimit"), + headers.items(), + ) ) - ) - # Log response - OPENAI_AUDIT_LOG_CONTENTS[prompt] = headers, result.http_status, data # Append response data to audit log - return result + # Log response + OPENAI_AUDIT_LOG_CONTENTS[prompt] = headers, 200, data # Append response data to audit log + return result + + return _wrap_openai_api_requestor_request def bind_request_params(method, url, params=None, *args, **kwargs): @@ -151,3 +233,39 @@ def bind_request_params(method, url, params=None, *args, **kwargs): def bind_request_interpret_response_params(result, stream): return result.content.decode("utf-8"), result.status_code, result.headers + + +def bind_base_client_process_response( + cast_to, + options, + response, + stream, + stream_cls, +): + return options, response + + +@pytest.fixture(scope="session") +def wrap_openai_base_client_process_response(extract_shortened_prompt): # noqa: F811 + def _wrap_openai_base_client_process_response(wrapped, instance, args, kwargs): + options, response = bind_base_client_process_response(*args, **kwargs) + if not options: + return wrapped(*args, **kwargs) + + data = getattr(options, "json_data", {}) + prompt = extract_shortened_prompt(data) + rheaders = getattr(response, "headers") + + headers = dict( + filter( + lambda k: k[0].lower() in RECORDED_HEADERS + or k[0].lower().startswith("openai") + or k[0].lower().startswith("x-ratelimit"), + rheaders.items(), + ) + ) + body = json.loads(response.content.decode("utf-8")) + 
OPENAI_AUDIT_LOG_CONTENTS[prompt] = headers, response.status_code, body # Append response data to audit log + return wrapped(*args, **kwargs) + + return _wrap_openai_base_client_process_response diff --git a/tests/mlmodel_openai/test_chat_completion_v1.py b/tests/mlmodel_openai/test_chat_completion_v1.py new file mode 100644 index 0000000000..ee7b714893 --- /dev/null +++ b/tests/mlmodel_openai/test_chat_completion_v1.py @@ -0,0 +1,36 @@ +# Copyright 2010 New Relic, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
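The recording fixtures above all apply the same header filter before writing to the audit log: keep only headers named in `RECORDED_HEADERS` plus anything prefixed `openai` or `x-ratelimit`. A standalone sketch of that filter, extracted from the fixture code (the sample header values are made up for illustration):

```python
# Allow-list used by the recording fixtures in the diff above.
RECORDED_HEADERS = set(["x-request-id", "content-type"])


def filter_recorded_headers(rheaders):
    # Keep allow-listed headers plus openai-* and x-ratelimit-* prefixes;
    # everything else (cookies, auth, etc.) is dropped from the audit log.
    return dict(
        filter(
            lambda k: k[0].lower() in RECORDED_HEADERS
            or k[0].lower().startswith("openai")
            or k[0].lower().startswith("x-ratelimit"),
            rheaders.items(),
        )
    )


headers = filter_recorded_headers(
    {
        "Content-Type": "application/json",
        "x-request-id": "abc123",
        "openai-organization": "foobar-jtbczk",
        "x-ratelimit-limit-requests": "200",
        "Set-Cookie": "should-not-be-recorded",
    }
)
```

Matching is case-insensitive on the header name, which is why `Content-Type` survives the allow-list check.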
+ +from testing_support.fixtures import ( # noqa: F401; pylint: disable=W0611 + override_application_settings, + reset_core_stats_engine, +) + +from newrelic.api.background_task import background_task +from newrelic.api.transaction import add_custom_attribute + +_test_openai_chat_completion_messages = ( + {"role": "system", "content": "You are a scientist."}, + {"role": "user", "content": "What is 212 degrees Fahrenheit converted to Celsius?"}, +) + + +@reset_core_stats_engine() +@background_task() +def test_openai_chat_completion_sync_in_txn_with_convo_id(set_trace_info, sync_openai_client): + set_trace_info() + add_custom_attribute("conversation_id", "my-awesome-id") + sync_openai_client.chat.completions.create( + model="gpt-3.5-turbo", messages=_test_openai_chat_completion_messages, temperature=0.7, max_tokens=100 + ) diff --git a/tests/mlmodel_openai/test_embeddings_v1.py b/tests/mlmodel_openai/test_embeddings_v1.py new file mode 100644 index 0000000000..55b9f3596b --- /dev/null +++ b/tests/mlmodel_openai/test_embeddings_v1.py @@ -0,0 +1,26 @@ +# Copyright 2010 New Relic, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+from testing_support.fixtures import ( # noqa: F401; pylint: disable=W0611 + override_application_settings, + reset_core_stats_engine, +) + +from newrelic.api.background_task import background_task + + +@reset_core_stats_engine() +@background_task() +def test_openai_embedding_sync(set_trace_info, sync_openai_client): + set_trace_info() + sync_openai_client.embeddings.create(input="This is an embedding test.", model="text-embedding-ada-002") diff --git a/tox.ini b/tox.ini index 0197fa5170..53020ce3ab 100644 --- a/tox.ini +++ b/tox.ini @@ -139,7 +139,8 @@ envlist = python-framework_starlette-{py37,py38}-starlette{002001}, python-framework_starlette-{py37,py38,py39,py310,py311,pypy38}-starlettelatest, python-framework_strawberry-{py37,py38,py39,py310,py311}-strawberrylatest, - python-mlmodel_openai-{py37,py38,py39,py310,py311,pypy38}, + python-mlmodel_openai-openai0-{py37,py38,py39,py310,py311,pypy38}, + python-mlmodel_openai-openai1-{py37,py38,py39,py310,py311,pypy38}, python-logger_logging-{py27,py37,py38,py39,py310,py311,pypy27,pypy38}, python-logger_loguru-{py37,py38,py39,py310,py311,pypy38}-logurulatest, python-logger_loguru-py39-loguru{06,05}, @@ -341,7 +342,8 @@ deps = framework_tornado: pycurl framework_tornado-tornadolatest: tornado framework_tornado-tornadomaster: https://github.com/tornadoweb/tornado/archive/master.zip - mlmodel_openai: openai[datalib]<1.0 + mlmodel_openai-openai0: openai[datalib]<1.0 + mlmodel_openai-openai1: openai[datalib]<2.0 mlmodel_openai: protobuf logger_loguru-logurulatest: loguru logger_loguru-loguru06: loguru<0.7 From 140c7bcfbfcfd1985f4e30d132db3c6aa0a5db69 Mon Sep 17 00:00:00 2001 From: Uma Annamalai Date: Tue, 12 Dec 2023 17:17:02 -0800 Subject: [PATCH 013/199] Add support for OpenAI v1 embeddings (#1002) * Add embeddings OpenAI v1 support. * Fix errors tests. * Add embeddings OpenAI v1 support. * Fix errors tests. * Add updated tests for compatibility with new mock server. * Update tox. * Restore chat completion error tests.
* Address review comments. * Store converted response object in new var for v1. --- newrelic/config.py | 6 +- newrelic/hooks/mlmodel_openai.py | 72 +++++++--- tests/mlmodel_openai/conftest.py | 2 + .../test_embeddings_error_v1.py | 28 ++++ tests/mlmodel_openai/test_embeddings_v1.py | 126 +++++++++++++++++- 5 files changed, 215 insertions(+), 19 deletions(-) create mode 100644 tests/mlmodel_openai/test_embeddings_error_v1.py diff --git a/newrelic/config.py b/newrelic/config.py index f9f2fedcb7..dad3cf4ebe 100644 --- a/newrelic/config.py +++ b/newrelic/config.py @@ -2048,6 +2048,11 @@ def _process_module_builtin_defaults(): "newrelic.hooks.mlmodel_openai", "instrument_openai_api_resources_chat_completion", ) + _process_module_definition( + "openai.resources.embeddings", + "newrelic.hooks.mlmodel_openai", + "instrument_openai_resources_embeddings", + ) _process_module_definition( "openai.util", "newrelic.hooks.mlmodel_openai", @@ -2058,7 +2063,6 @@ def _process_module_builtin_defaults(): "newrelic.hooks.mlmodel_openai", "instrument_openai_base_client", ) - _process_module_definition( "asyncio.base_events", "newrelic.hooks.coroutines_asyncio", diff --git a/newrelic/hooks/mlmodel_openai.py b/newrelic/hooks/mlmodel_openai.py index 5b3857d0ee..38e08e3541 100644 --- a/newrelic/hooks/mlmodel_openai.py +++ b/newrelic/hooks/mlmodel_openai.py @@ -23,10 +23,13 @@ from newrelic.common.package_version_utils import get_package_version from newrelic.core.config import global_settings + OPENAI_VERSION = get_package_version("openai") +OPENAI_VERSION_TUPLE = tuple(map(int, OPENAI_VERSION.split("."))) +OPENAI_V1 = OPENAI_VERSION_TUPLE >= (1,) -def wrap_embedding_create(wrapped, instance, args, kwargs): +def wrap_embedding_sync(wrapped, instance, args, kwargs): transaction = current_transaction() if not transaction or kwargs.get("stream", False): return wrapped(*args, **kwargs) @@ -38,7 +41,7 @@ def wrap_embedding_create(wrapped, instance, args, kwargs): embedding_id = str(uuid.uuid4()) 
# Get API key without using the response so we can store it before the response is returned in case of errors - api_key = getattr(openai, "api_key", None) + api_key = getattr(instance._client, "api_key", "") if OPENAI_V1 else getattr(openai, "api_key", None) api_key_last_four_digits = f"sk-{api_key[-4:]}" if api_key else "" span_id = None @@ -93,25 +96,37 @@ def wrap_embedding_create(wrapped, instance, args, kwargs): if not response: return response - response_model = response.get("model", "") - response_usage = response.get("usage", {}) response_headers = getattr(response, "_nr_response_headers", None) + + # In v1, response objects are pydantic models so this function call converts the object back to a dictionary for backwards compatibility + # Use standard response object returned from create call for v0 + if OPENAI_V1: + attribute_response = response.model_dump() + else: + attribute_response = response + + request_id = response_headers.get("x-request-id", "") if response_headers else "" + response_model = attribute_response.get("model", "") + response_usage = attribute_response.get("usage", {}) + api_type = getattr(attribute_response, "api_type", "") + organization = response_headers.get("openai-organization", "") if OPENAI_V1 else attribute_response.organization + full_embedding_response_dict = { "id": embedding_id, "appName": settings.app_name, - "api_key_last_four_digits": api_key_last_four_digits, "span_id": span_id, "trace_id": trace_id, "transaction_id": transaction.guid, "input": kwargs.get("input", ""), + "api_key_last_four_digits": f"sk-{api_key[-4:]}" if api_key else "", "request.model": kwargs.get("model") or kwargs.get("engine") or "", "request_id": request_id, "duration": ft.duration, "response.model": response_model, - "response.organization": response.organization, - "response.api_type": response.api_type, + "response.organization": organization, + "response.api_type": api_type, # API type was removed in v1 "response.usage.total_tokens": 
response_usage.get("total_tokens", "") if any(response_usage) else "", "response.usage.prompt_tokens": response_usage.get("prompt_tokens", "") if any(response_usage) else "", "response.headers.llmVersion": response_headers.get("openai-version", ""), @@ -417,7 +432,7 @@ def create_chat_completion_message_event( return (conversation_id, request_id, message_ids) -async def wrap_embedding_acreate(wrapped, instance, args, kwargs): +async def wrap_embedding_async(wrapped, instance, args, kwargs): transaction = current_transaction() if not transaction or kwargs.get("stream", False): return await wrapped(*args, **kwargs) @@ -429,7 +444,7 @@ async def wrap_embedding_acreate(wrapped, instance, args, kwargs): embedding_id = str(uuid.uuid4()) # Get API key without using the response so we can store it before the response is returned in case of errors - api_key = getattr(openai, "api_key", None) + api_key = getattr(instance._client, "api_key", "") if OPENAI_V1 else getattr(openai, "api_key", None) api_key_last_four_digits = f"sk-{api_key[-4:]}" if api_key else "" span_id = None @@ -484,25 +499,36 @@ async def wrap_embedding_acreate(wrapped, instance, args, kwargs): if not response: return response - response_model = response.get("model", "") - response_usage = response.get("usage", {}) response_headers = getattr(response, "_nr_response_headers", None) + + # In v1, response objects are pydantic models so this function call converts the object back to a dictionary for backwards compatibility + # Use standard response object returned from create call for v0 + if OPENAI_V1: + attribute_response = response.model_dump() + else: + attribute_response = response + request_id = response_headers.get("x-request-id", "") if response_headers else "" + response_model = attribute_response.get("model", "") + response_usage = attribute_response.get("usage", {}) + api_type = getattr(attribute_response, "api_type", "") + organization = response_headers.get("openai-organization", "") if OPENAI_V1 
else attribute_response.organization + full_embedding_response_dict = { "id": embedding_id, "appName": settings.app_name, - "api_key_last_four_digits": api_key_last_four_digits, "span_id": span_id, "trace_id": trace_id, "transaction_id": transaction.guid, "input": kwargs.get("input", ""), + "api_key_last_four_digits": f"sk-{api_key[-4:]}" if api_key else "", "request.model": kwargs.get("model") or kwargs.get("engine") or "", "request_id": request_id, "duration": ft.duration, "response.model": response_model, - "response.organization": response.organization, - "response.api_type": response.api_type, + "response.organization": organization, + "response.api_type": api_type, # API type was removed in v1 "response.usage.total_tokens": response_usage.get("total_tokens", "") if any(response_usage) else "", "response.usage.prompt_tokens": response_usage.get("prompt_tokens", "") if any(response_usage) else "", "response.headers.llmVersion": response_headers.get("openai-version", ""), @@ -533,6 +559,7 @@ async def wrap_embedding_acreate(wrapped, instance, args, kwargs): return response + async def wrap_chat_completion_acreate(wrapped, instance, args, kwargs): transaction = current_transaction() @@ -730,7 +757,7 @@ def wrap_base_client_process_response(wrapped, instance, args, kwargs): nr_response_headers = getattr(response, "headers") return_val = wrapped(*args, **kwargs) - + # Obtain response headers for v1 return_val._nr_response_headers = nr_response_headers return return_val @@ -741,9 +768,9 @@ def instrument_openai_util(module): def instrument_openai_api_resources_embedding(module): if hasattr(module.Embedding, "create"): - wrap_function_wrapper(module, "Embedding.create", wrap_embedding_create) + wrap_function_wrapper(module, "Embedding.create", wrap_embedding_sync) if hasattr(module.Embedding, "acreate"): - wrap_function_wrapper(module, "Embedding.acreate", wrap_embedding_acreate) + wrap_function_wrapper(module, "Embedding.acreate", wrap_embedding_async) def 
instrument_openai_api_resources_chat_completion(module): @@ -753,6 +780,17 @@ def instrument_openai_api_resources_chat_completion(module): wrap_function_wrapper(module, "ChatCompletion.acreate", wrap_chat_completion_acreate) +# OpenAI v1 instrumentation points +def instrument_openai_resources_embeddings(module): + if hasattr(module, "Embeddings"): + if hasattr(module.Embeddings, "create"): + wrap_function_wrapper(module, "Embeddings.create", wrap_embedding_sync) + + if hasattr(module, "AsyncEmbeddings"): + if hasattr(module.AsyncEmbeddings, "create"): + wrap_function_wrapper(module, "AsyncEmbeddings.create", wrap_embedding_async) + + def instrument_openai_base_client(module): if hasattr(module.BaseClient, "_process_response"): wrap_function_wrapper(module, "BaseClient._process_response", wrap_base_client_process_response) diff --git a/tests/mlmodel_openai/conftest.py b/tests/mlmodel_openai/conftest.py index 403a76f46a..57ecddf392 100644 --- a/tests/mlmodel_openai/conftest.py +++ b/tests/mlmodel_openai/conftest.py @@ -53,6 +53,8 @@ collect_ignore = [ "test_chat_completion_v1.py", "test_embeddings_v1.py", + "test_chat_completion_error_v1.py", + "test_embeddings_error_v1.py", ] else: collect_ignore = [ diff --git a/tests/mlmodel_openai/test_embeddings_error_v1.py b/tests/mlmodel_openai/test_embeddings_error_v1.py new file mode 100644 index 0000000000..485723f041 --- /dev/null +++ b/tests/mlmodel_openai/test_embeddings_error_v1.py @@ -0,0 +1,28 @@ +# Copyright 2010 New Relic, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and +# limitations under the License. + +import openai +import pytest +from newrelic.api.background_task import background_task + + +# Sync tests: +@background_task() +def test_embeddings_invalid_request_error_invalid_model(set_trace_info, sync_openai_client): + with pytest.raises(openai.InternalServerError): + set_trace_info() + sync_openai_client.embeddings.create(input="Model does not exist.", model="does-not-exist") + + + diff --git a/tests/mlmodel_openai/test_embeddings_v1.py b/tests/mlmodel_openai/test_embeddings_v1.py index 55b9f3596b..9bf91967a7 100644 --- a/tests/mlmodel_openai/test_embeddings_v1.py +++ b/tests/mlmodel_openai/test_embeddings_v1.py @@ -11,16 +11,140 @@ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. -from testing_support.fixtures import ( # noqa: F401; pylint: disable=W0611 + +import openai +from testing_support.fixtures import ( # override_application_settings, override_application_settings, reset_core_stats_engine, + validate_custom_event_count, +) +from testing_support.validators.validate_custom_events import validate_custom_events +from testing_support.validators.validate_transaction_metrics import ( + validate_transaction_metrics, ) from newrelic.api.background_task import background_task +disabled_custom_insights_settings = {"custom_insights_events.enabled": False} + +embedding_recorded_events = [ + ( + {"type": "LlmEmbedding"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (mlmodel_openai)", + "transaction_id": "transaction-id", + "span_id": None, + "trace_id": "trace-id", + "input": "This is an embedding test.", + "api_key_last_four_digits": "sk-CRET", + "duration": None, # Response time varies each test run + "response.model": "text-embedding-ada-002-v2", + "request.model": "text-embedding-ada-002", + 
"request_id": "fef7adee5adcfb03c083961bdce4f6a4", + "response.organization": "foobar-jtbczk", + "response.usage.total_tokens": 6, + "response.usage.prompt_tokens": 6, + "response.api_type": "", + "response.headers.llmVersion": "2020-10-01", + "response.headers.ratelimitLimitRequests": 200, + "response.headers.ratelimitLimitTokens": 150000, + "response.headers.ratelimitResetTokens": "2ms", + "response.headers.ratelimitResetRequests": "19m5.228s", + "response.headers.ratelimitRemainingTokens": 149993, + "response.headers.ratelimitRemainingRequests": 197, + "vendor": "openAI", + "ingest_source": "Python", + }, + ), +] + @reset_core_stats_engine() +@validate_custom_events(embedding_recorded_events) +@validate_custom_event_count(count=1) +@validate_transaction_metrics( + name="test_embeddings_v1:test_openai_embedding_sync", + scoped_metrics=[("Llm/embedding/OpenAI/create", 1)], + rollup_metrics=[("Llm/embedding/OpenAI/create", 1)], + custom_metrics=[ + ("Python/ML/OpenAI/%s" % openai.__version__, 1), + ], + background_task=True, +) @background_task() def test_openai_embedding_sync(set_trace_info, sync_openai_client): set_trace_info() sync_openai_client.embeddings.create(input="This is an embedding test.", model="text-embedding-ada-002") + + +@reset_core_stats_engine() +@validate_custom_event_count(count=0) +def test_openai_embedding_sync_outside_txn(sync_openai_client): + sync_openai_client.embeddings.create(input="This is an embedding test.", model="text-embedding-ada-002") + + +@override_application_settings(disabled_custom_insights_settings) +@reset_core_stats_engine() +@validate_custom_event_count(count=0) +@validate_transaction_metrics( + name="test_embeddings_v1:test_openai_embedding_sync_disabled_settings", + scoped_metrics=[("Llm/embedding/OpenAI/create", 1)], + rollup_metrics=[("Llm/embedding/OpenAI/create", 1)], + custom_metrics=[ + ("Python/ML/OpenAI/%s" % openai.__version__, 1), + ], + background_task=True, +) +@background_task() +def 
test_openai_embedding_sync_disabled_settings(set_trace_info, sync_openai_client): + set_trace_info() + sync_openai_client.embeddings.create(input="This is an embedding test.", model="text-embedding-ada-002") + + +@reset_core_stats_engine() +@validate_custom_events(embedding_recorded_events) +@validate_custom_event_count(count=1) +@validate_transaction_metrics( + name="test_embeddings_v1:test_openai_embedding_async", + scoped_metrics=[("Llm/embedding/OpenAI/create", 1)], + rollup_metrics=[("Llm/embedding/OpenAI/create", 1)], + custom_metrics=[ + ("Python/ML/OpenAI/%s" % openai.__version__, 1), + ], + background_task=True, +) +@background_task() +def test_openai_embedding_async(loop, set_trace_info, async_openai_client): + set_trace_info() + + loop.run_until_complete( + async_openai_client.embeddings.create(input="This is an embedding test.", model="text-embedding-ada-002") + ) + + +@reset_core_stats_engine() +@validate_custom_event_count(count=0) +def test_openai_embedding_async_outside_transaction(loop, async_openai_client): + loop.run_until_complete( + async_openai_client.embeddings.create(input="This is an embedding test.", model="text-embedding-ada-002") + ) + + +@override_application_settings(disabled_custom_insights_settings) +@reset_core_stats_engine() +@validate_custom_event_count(count=0) +@validate_transaction_metrics( + name="test_embeddings_v1:test_openai_embedding_async_disabled_custom_insights_events", + scoped_metrics=[("Llm/embedding/OpenAI/create", 1)], + rollup_metrics=[("Llm/embedding/OpenAI/create", 1)], + custom_metrics=[ + ("Python/ML/OpenAI/%s" % openai.__version__, 1), + ], + background_task=True, +) +@background_task() +def test_openai_embedding_async_disabled_custom_insights_events(loop, async_openai_client): + loop.run_until_complete( + async_openai_client.embeddings.create(input="This is an embedding test.", model="text-embedding-ada-002") + ) From 7b98c510c486f95c4dc56e21d6002ac1e62fd9fd Mon Sep 17 00:00:00 2001 From: Hannah Stepanek 
Date: Thu, 14 Dec 2023 18:29:43 -0800 Subject: [PATCH 014/199] Add support for openai v1 completions (#1006) * Add chat completion & header instrumentation * Add message id tests & fix bug * Add error tracing tests & impl for v1 * Use latest to test instead of <1.0 * Ignore v1 embedding error tests * Capture _usage_based headers in v0 * Verify all error attrs are asserted * Use body instead of content * Handle body being None --- newrelic/config.py | 5 + newrelic/hooks/mlmodel_openai.py | 170 +++++-- .../_mock_external_openai_server.py | 66 ++- tests/mlmodel_openai/conftest.py | 82 ++-- tests/mlmodel_openai/test_chat_completion.py | 6 + .../test_chat_completion_error_v1.py | 416 ++++++++++++++++++ .../mlmodel_openai/test_chat_completion_v1.py | 339 +++++++++++++- .../test_get_llm_message_ids_v1.py | 234 ++++++++++ tox.ini | 6 +- 9 files changed, 1232 insertions(+), 92 deletions(-) create mode 100644 tests/mlmodel_openai/test_chat_completion_error_v1.py create mode 100644 tests/mlmodel_openai/test_get_llm_message_ids_v1.py diff --git a/newrelic/config.py b/newrelic/config.py index dad3cf4ebe..3c6b45b034 100644 --- a/newrelic/config.py +++ b/newrelic/config.py @@ -2058,6 +2058,11 @@ def _process_module_builtin_defaults(): "newrelic.hooks.mlmodel_openai", "instrument_openai_util", ) + _process_module_definition( + "openai.resources.chat.completions", + "newrelic.hooks.mlmodel_openai", + "instrument_openai_resources_chat_completions", + ) _process_module_definition( "openai._base_client", "newrelic.hooks.mlmodel_openai", diff --git a/newrelic/hooks/mlmodel_openai.py b/newrelic/hooks/mlmodel_openai.py index 38e08e3541..a653b7ca69 100644 --- a/newrelic/hooks/mlmodel_openai.py +++ b/newrelic/hooks/mlmodel_openai.py @@ -23,7 +23,6 @@ from newrelic.common.package_version_utils import get_package_version from newrelic.core.config import global_settings - OPENAI_VERSION = get_package_version("openai") OPENAI_VERSION_TUPLE = tuple(map(int, OPENAI_VERSION.split("."))) OPENAI_V1 
= OPENAI_VERSION_TUPLE >= (1,) @@ -105,7 +104,6 @@ def wrap_embedding_sync(wrapped, instance, args, kwargs): else: attribute_response = response - request_id = response_headers.get("x-request-id", "") if response_headers else "" response_model = attribute_response.get("model", "") @@ -126,7 +124,7 @@ def wrap_embedding_sync(wrapped, instance, args, kwargs): "duration": ft.duration, "response.model": response_model, "response.organization": organization, - "response.api_type": api_type, # API type was removed in v1 + "response.api_type": api_type, # API type was removed in v1 "response.usage.total_tokens": response_usage.get("total_tokens", "") if any(response_usage) else "", "response.usage.prompt_tokens": response_usage.get("prompt_tokens", "") if any(response_usage) else "", "response.headers.llmVersion": response_headers.get("openai-version", ""), @@ -157,7 +155,7 @@ def wrap_embedding_sync(wrapped, instance, args, kwargs): return response -def wrap_chat_completion_create(wrapped, instance, args, kwargs): +def wrap_chat_completion_sync(wrapped, instance, args, kwargs): transaction = current_transaction() if not transaction or kwargs.get("stream", False): @@ -169,7 +167,7 @@ def wrap_chat_completion_create(wrapped, instance, args, kwargs): request_message_list = kwargs.get("messages", []) # Get API key without using the response so we can store it before the response is returned in case of errors - api_key = getattr(openai, "api_key", None) + api_key = getattr(instance._client, "api_key", None) if OPENAI_V1 else getattr(openai, "api_key", None) api_key_last_four_digits = f"sk-{api_key[-4:]}" if api_key else "" span_id = None @@ -192,18 +190,37 @@ def wrap_chat_completion_create(wrapped, instance, args, kwargs): trace_id = available_metadata.get("trace.id", "") try: - response = wrapped(*args, **kwargs) + return_val = wrapped(*args, **kwargs) except Exception as exc: - exc_organization = getattr(exc, "organization", "") + if OPENAI_V1: + response = getattr(exc, 
"response", "") + response_headers = getattr(response, "headers", "") + exc_organization = response_headers.get("openai-organization", "") if response_headers else "" + # There appears to be a bug here in openai v1 where despite having code, + # param, etc in the error response, they are not populated on the exception + # object so grab them from the response body object instead. + body = getattr(exc, "body", {}) or {} + notice_error_attributes = { + "http.statusCode": getattr(exc, "status_code", "") or "", + "error.message": body.get("message", "") or "", + "error.code": body.get("code", "") or "", + "error.param": body.get("param", "") or "", + "completion_id": completion_id, + } + else: + exc_organization = getattr(exc, "organization", "") + notice_error_attributes = { + "http.statusCode": getattr(exc, "http_status", ""), + "error.message": getattr(exc, "_message", ""), + "error.code": getattr(getattr(exc, "error", ""), "code", ""), + "error.param": getattr(exc, "param", ""), + "completion_id": completion_id, + } + # Override the default message if it is not empty. 
+ message = notice_error_attributes.pop("error.message") + if message: + exc._nr_message = message - notice_error_attributes = { - "http.statusCode": getattr(exc, "http_status", ""), - "error.message": getattr(exc, "_message", ""), - "error.code": getattr(getattr(exc, "error", ""), "code", ""), - "error.param": getattr(exc, "param", ""), - "completion_id": completion_id, - } - exc._nr_message = notice_error_attributes.pop("error.message") ft.notice_error( attributes=notice_error_attributes, ) @@ -244,11 +261,17 @@ def wrap_chat_completion_create(wrapped, instance, args, kwargs): raise - if not response: - return response + if not return_val: + return return_val # At this point, we have a response so we can grab attributes only available on the response object - response_headers = getattr(response, "_nr_response_headers", None) + response_headers = getattr(return_val, "_nr_response_headers", None) + # In v1, response objects are pydantic models so this function call converts the + # object back to a dictionary for backwards compatibility. 
+ response = return_val + if OPENAI_V1: + response = response.model_dump() + response_model = response.get("model", "") response_id = response.get("id") request_id = response_headers.get("x-request-id", "") if response_headers else "" @@ -257,6 +280,9 @@ def wrap_chat_completion_create(wrapped, instance, args, kwargs): messages = kwargs.get("messages", []) choices = response.get("choices", []) + organization = ( + response_headers.get("openai-organization", "") if OPENAI_V1 else getattr(response, "organization", "") + ) full_chat_completion_summary_dict = { "id": completion_id, @@ -274,11 +300,11 @@ def wrap_chat_completion_create(wrapped, instance, args, kwargs): "request_id": request_id, "duration": ft.duration, "response.model": response_model, - "response.organization": getattr(response, "organization", ""), + "response.organization": organization, "response.usage.completion_tokens": response_usage.get("completion_tokens", "") if any(response_usage) else "", "response.usage.total_tokens": response_usage.get("total_tokens", "") if any(response_usage) else "", "response.usage.prompt_tokens": response_usage.get("prompt_tokens", "") if any(response_usage) else "", - "response.choices.finish_reason": choices[0].finish_reason if choices else "", + "response.choices.finish_reason": choices[0].get("finish_reason", "") if choices else "", "response.api_type": getattr(response, "api_type", ""), "response.headers.llmVersion": response_headers.get("openai-version", ""), "response.headers.ratelimitLimitRequests": check_rate_limit_header( @@ -299,13 +325,22 @@ def wrap_chat_completion_create(wrapped, instance, args, kwargs): "response.headers.ratelimitRemainingRequests": check_rate_limit_header( response_headers, "x-ratelimit-remaining-requests", True ), + "response.headers.ratelimitLimitTokensUsageBased": check_rate_limit_header( + response_headers, "x-ratelimit-limit-tokens_usage_based", True + ), + "response.headers.ratelimitResetTokensUsageBased": 
check_rate_limit_header( + response_headers, "x-ratelimit-reset-tokens_usage_based", False + ), + "response.headers.ratelimitRemainingTokensUsageBased": check_rate_limit_header( + response_headers, "x-ratelimit-remaining-tokens_usage_based", True + ), "response.number_of_messages": len(messages) + len(choices), } transaction.record_custom_event("LlmChatCompletionSummary", full_chat_completion_summary_dict) input_message_list = list(messages) - output_message_list = [choices[0].message] if choices else None + output_message_list = [choices[0].get("message", "")] if choices else None message_ids = create_chat_completion_message_event( transaction, @@ -326,7 +361,7 @@ def wrap_chat_completion_create(wrapped, instance, args, kwargs): transaction._nr_message_ids = {} transaction._nr_message_ids[response_id] = message_ids - return response + return return_val def check_rate_limit_header(response_headers, header_name, is_int): @@ -395,7 +430,7 @@ def create_chat_completion_message_event( if output_message_list: # Loop through all output messages received from the LLM response and emit a custom event for each one for index, message in enumerate(output_message_list): - message_content = getattr(message, "content", "") + message_content = message.get("content", "") # Add offset of input_message_length so we don't receive any duplicate index values that match the input message IDs index += len(input_message_list) @@ -418,7 +453,7 @@ def create_chat_completion_message_event( "trace_id": trace_id, "transaction_id": transaction.guid, "content": message_content, - "role": getattr(message, "role", ""), + "role": message.get("role", ""), "completion_id": chat_completion_id, "sequence": index, "response.model": response_model if response_model else "", @@ -559,8 +594,7 @@ async def wrap_embedding_async(wrapped, instance, args, kwargs): return response - -async def wrap_chat_completion_acreate(wrapped, instance, args, kwargs): +async def wrap_chat_completion_async(wrapped, instance, 
args, kwargs): transaction = current_transaction() if not transaction or kwargs.get("stream", False): @@ -572,7 +606,7 @@ async def wrap_chat_completion_acreate(wrapped, instance, args, kwargs): request_message_list = kwargs.get("messages", []) # Get API key without using the response so we can store it before the response is returned in case of errors - api_key = getattr(openai, "api_key", None) + api_key = getattr(instance._client, "api_key", None) if OPENAI_V1 else getattr(openai, "api_key", None) api_key_last_four_digits = f"sk-{api_key[-4:]}" if api_key else "" span_id = None @@ -595,18 +629,37 @@ async def wrap_chat_completion_acreate(wrapped, instance, args, kwargs): trace_id = available_metadata.get("trace.id", "") try: - response = await wrapped(*args, **kwargs) + return_val = await wrapped(*args, **kwargs) except Exception as exc: - exc_organization = getattr(exc, "organization", "") + if OPENAI_V1: + response = getattr(exc, "response", "") + response_headers = getattr(response, "headers", "") + exc_organization = response_headers.get("openai-organization", "") if response_headers else "" + # There appears to be a bug here in openai v1 where despite having code, + # param, etc in the error response, they are not populated on the exception + # object so grab them from the response body object instead. 
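Reviewer note: the comment above flags that openai v1 exceptions carry their details in `body` rather than as attributes, while v0 exposes `http_status`/`_message`/`error.code` directly. A minimal sketch of that dual-shape normalization, using stand-in exception classes (names hypothetical, not the real openai types):

```python
# Sketch only: stand-in shapes approximating openai v1 vs v0 error objects.
class V1StyleError(Exception):
    def __init__(self, status_code, body):
        self.status_code = status_code
        self.body = body  # dict with message/code/param, as in openai v1


class V0StyleError(Exception):
    def __init__(self, http_status, message, code, param):
        self.http_status = http_status
        self._message = message
        self.error = type("E", (), {"code": code})()
        self.param = param


def error_attributes(exc, is_v1):
    """Normalize error details into the attribute dict shape used in the patch."""
    if is_v1:
        # v1: details live on the exception's body, not the exception itself.
        body = getattr(exc, "body", {}) or {}
        return {
            "http.statusCode": getattr(exc, "status_code", "") or "",
            "error.message": body.get("message", "") or "",
            "error.code": body.get("code", "") or "",
            "error.param": body.get("param", "") or "",
        }
    # v0: details are plain attributes on the exception.
    return {
        "http.statusCode": getattr(exc, "http_status", ""),
        "error.message": getattr(exc, "_message", ""),
        "error.code": getattr(getattr(exc, "error", ""), "code", ""),
        "error.param": getattr(exc, "param", ""),
    }


v1 = error_attributes(
    V1StyleError(404, {"message": "no such model", "code": "model_not_found", "param": None}), True
)
v0 = error_attributes(V0StyleError(404, "no such model", "model_not_found", None), False)
```

Note that in the v1 branch a `None` param collapses to `""` via the `or ""` guards, whereas the v0 branch passes `None` through; the `exact_attrs` in the error tests below reflect the v1 shape.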
+ body = getattr(exc, "body", {}) or {} + notice_error_attributes = { + "http.statusCode": getattr(exc, "status_code", "") or "", + "error.message": body.get("message", "") or "", + "error.code": body.get("code", "") or "", + "error.param": body.get("param", "") or "", + "completion_id": completion_id, + } + else: + exc_organization = getattr(exc, "organization", "") + notice_error_attributes = { + "http.statusCode": getattr(exc, "http_status", ""), + "error.message": getattr(exc, "_message", ""), + "error.code": getattr(getattr(exc, "error", ""), "code", ""), + "error.param": getattr(exc, "param", ""), + "completion_id": completion_id, + } + # Override the default message if it is not empty. + message = notice_error_attributes.pop("error.message") + if message: + exc._nr_message = message - notice_error_attributes = { - "http.statusCode": getattr(exc, "http_status", ""), - "error.message": getattr(exc, "_message", ""), - "error.code": getattr(getattr(exc, "error", ""), "code", ""), - "error.param": getattr(exc, "param", ""), - "completion_id": completion_id, - } - exc._nr_message = notice_error_attributes.pop("error.message") ft.notice_error( attributes=notice_error_attributes, ) @@ -647,11 +700,17 @@ async def wrap_chat_completion_acreate(wrapped, instance, args, kwargs): raise - if not response: - return response + if not return_val: + return return_val # At this point, we have a response so we can grab attributes only available on the response object - response_headers = getattr(response, "_nr_response_headers", None) + response_headers = getattr(return_val, "_nr_response_headers", None) + # In v1, response objects are pydantic models so this function call converts the + # object back to a dictionary for backwards compatibility. 
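Reviewer note: the backwards-compatibility comment above refers to v1 responses being pydantic models, which the patch converts back to plain dicts so the rest of the code can keep using `.get()`. A sketch of that normalization with an illustrative stand-in class (not the real openai/pydantic type):

```python
class FakeV1Response:
    """Minimal stand-in for a pydantic response model (illustrative only)."""

    def __init__(self, data):
        self._data = data

    def model_dump(self):
        # pydantic v2 models expose model_dump() to serialize to a dict.
        return dict(self._data)


def as_dict(return_val):
    # Convert v1 pydantic-style models to plain dicts; pass v0 dicts through.
    if hasattr(return_val, "model_dump"):
        return return_val.model_dump()
    return return_val


v1_resp = as_dict(FakeV1Response({"model": "gpt-3.5-turbo-0613", "choices": []}))
v0_resp = as_dict({"model": "gpt-3.5-turbo-0613", "choices": []})
```

After this conversion both code paths read `response.get("model", "")`, `choices[0].get("finish_reason", "")`, etc., which is exactly why the patch also swaps the earlier `getattr(...)` accesses for `.get(...)`.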
+ response = return_val + if OPENAI_V1: + response = response.model_dump() + response_model = response.get("model", "") response_id = response.get("id") request_id = response_headers.get("x-request-id", "") if response_headers else "" @@ -660,6 +719,9 @@ async def wrap_chat_completion_acreate(wrapped, instance, args, kwargs): messages = kwargs.get("messages", []) choices = response.get("choices", []) + organization = ( + response_headers.get("openai-organization", "") if OPENAI_V1 else getattr(response, "organization", "") + ) full_chat_completion_summary_dict = { "id": completion_id, @@ -677,11 +739,11 @@ async def wrap_chat_completion_acreate(wrapped, instance, args, kwargs): "request_id": request_id, "duration": ft.duration, "response.model": response_model, - "response.organization": getattr(response, "organization", ""), + "response.organization": organization, "response.usage.completion_tokens": response_usage.get("completion_tokens", "") if any(response_usage) else "", "response.usage.total_tokens": response_usage.get("total_tokens", "") if any(response_usage) else "", "response.usage.prompt_tokens": response_usage.get("prompt_tokens", "") if any(response_usage) else "", - "response.choices.finish_reason": choices[0].finish_reason if choices else "", + "response.choices.finish_reason": choices[0].get("finish_reason", "") if choices else "", "response.api_type": getattr(response, "api_type", ""), "response.headers.llmVersion": response_headers.get("openai-version", ""), "response.headers.ratelimitLimitRequests": check_rate_limit_header( @@ -702,13 +764,22 @@ async def wrap_chat_completion_acreate(wrapped, instance, args, kwargs): "response.headers.ratelimitRemainingRequests": check_rate_limit_header( response_headers, "x-ratelimit-remaining-requests", True ), + "response.headers.ratelimitLimitTokensUsageBased": check_rate_limit_header( + response_headers, "x-ratelimit-limit-tokens_usage_based", True + ), + "response.headers.ratelimitResetTokensUsageBased": 
check_rate_limit_header( + response_headers, "x-ratelimit-reset-tokens_usage_based", False + ), + "response.headers.ratelimitRemainingTokensUsageBased": check_rate_limit_header( + response_headers, "x-ratelimit-remaining-tokens_usage_based", True + ), "response.number_of_messages": len(messages) + len(choices), } transaction.record_custom_event("LlmChatCompletionSummary", full_chat_completion_summary_dict) input_message_list = list(messages) - output_message_list = [choices[0].message] if choices else None + output_message_list = [choices[0].get("message", "")] if choices else None message_ids = create_chat_completion_message_event( transaction, @@ -729,7 +800,7 @@ async def wrap_chat_completion_acreate(wrapped, instance, args, kwargs): transaction._nr_message_ids = {} transaction._nr_message_ids[response_id] = message_ids - return response + return return_val def wrap_convert_to_openai_object(wrapped, instance, args, kwargs): @@ -775,9 +846,16 @@ def instrument_openai_api_resources_embedding(module): def instrument_openai_api_resources_chat_completion(module): if hasattr(module.ChatCompletion, "create"): - wrap_function_wrapper(module, "ChatCompletion.create", wrap_chat_completion_create) + wrap_function_wrapper(module, "ChatCompletion.create", wrap_chat_completion_sync) if hasattr(module.ChatCompletion, "acreate"): - wrap_function_wrapper(module, "ChatCompletion.acreate", wrap_chat_completion_acreate) + wrap_function_wrapper(module, "ChatCompletion.acreate", wrap_chat_completion_async) + + +def instrument_openai_resources_chat_completions(module): + if hasattr(module.Completions, "create"): + wrap_function_wrapper(module, "Completions.create", wrap_chat_completion_sync) + if hasattr(module.AsyncCompletions, "create"): + wrap_function_wrapper(module, "AsyncCompletions.create", wrap_chat_completion_async) # OpenAI v1 instrumentation points diff --git a/tests/mlmodel_openai/_mock_external_openai_server.py b/tests/mlmodel_openai/_mock_external_openai_server.py index 
6cac9e2a68..edcfc47f35 100644 --- a/tests/mlmodel_openai/_mock_external_openai_server.py +++ b/tests/mlmodel_openai/_mock_external_openai_server.py @@ -35,7 +35,7 @@ { "content-type": "application/json", "openai-model": "gpt-3.5-turbo-0613", - "openai-organization": "foobar-jtbczk", + "openai-organization": "new-relic-nkmd8b", "openai-processing-ms": "6326", "openai-version": "2020-10-01", "x-ratelimit-limit-requests": "200", @@ -60,7 +60,45 @@ "index": 0, "message": { "role": "assistant", - "content": "To convert 212 degrees Fahrenheit to Celsius, you can use the formula:\n\n\u00b0C = (\u00b0F - 32) x 5/9\n\nSubstituting the value, we get:\n\n\u00b0C = (212 - 32) x 5/9\n = 180 x 5/9\n = 100\n\nTherefore, 212 degrees Fahrenheit is equal to 100 degrees Celsius.", + "content": "212 degrees Fahrenheit is equal to 100 degrees Celsius.", + }, + "finish_reason": "stop", + } + ], + "usage": {"prompt_tokens": 26, "completion_tokens": 82, "total_tokens": 108}, + "system_fingerprint": None, + }, + ], + "You are a mathematician.": [ + { + "content-type": "application/json", + "openai-model": "gpt-3.5-turbo-0613", + "openai-organization": "new-relic-nkmd8b", + "openai-processing-ms": "6326", + "openai-version": "2020-10-01", + "x-ratelimit-limit-requests": "200", + "x-ratelimit-limit-tokens": "40000", + "x-ratelimit-limit-tokens_usage_based": "40000", + "x-ratelimit-remaining-requests": "198", + "x-ratelimit-remaining-tokens": "39880", + "x-ratelimit-remaining-tokens_usage_based": "39880", + "x-ratelimit-reset-requests": "11m32.334s", + "x-ratelimit-reset-tokens": "180ms", + "x-ratelimit-reset-tokens_usage_based": "180ms", + "x-request-id": "f8d0f53b6881c5c0a3698e55f8f410cd", + }, + 200, + { + "id": "chatcmpl-87sb95K4EF2nuJRcTs43Tm9ntTeat", + "object": "chat.completion", + "created": 1701995833, + "model": "gpt-3.5-turbo-0613", + "choices": [ + { + "index": 0, + "message": { + "role": "assistant", + "content": "1 plus 2 is 3.", }, "finish_reason": "stop", } @@ -69,6 +107,30 @@ 
"system_fingerprint": None, }, ], + "Invalid API key.": [ + {"content-type": "application/json; charset=utf-8", "x-request-id": "a51821b9fd83d8e0e04542bedc174310"}, + 401, + { + "error": { + "message": "Incorrect API key provided: DEADBEEF. You can find your API key at https://platform.openai.com/account/api-keys.", + "type": "invalid_request_error", + "param": None, + "code": "invalid_api_key", + } + }, + ], + "Model does not exist.": [ + {"content-type": "application/json; charset=utf-8", "x-request-id": "3b0f8e510ee8a67c08a227a98eadbbe6"}, + 404, + { + "error": { + "message": "The model `does-not-exist` does not exist", + "type": "invalid_request_error", + "param": None, + "code": "model_not_found", + } + }, + ], "This is an embedding test.": [ { "content-type": "application/json", diff --git a/tests/mlmodel_openai/conftest.py b/tests/mlmodel_openai/conftest.py index 57ecddf392..6c0fed0e44 100644 --- a/tests/mlmodel_openai/conftest.py +++ b/tests/mlmodel_openai/conftest.py @@ -52,7 +52,9 @@ if get_openai_version() < (1, 0): collect_ignore = [ "test_chat_completion_v1.py", + "test_chat_completion_error_v1.py", "test_embeddings_v1.py", + "test_get_llm_message_ids_v1.py", "test_chat_completion_error_v1.py", "test_embeddings_error_v1.py", ] @@ -63,6 +65,7 @@ "test_chat_completion.py", "test_get_llm_message_ids.py", "test_chat_completion_error.py", + "test_embeddings_error_v1.py", ] @@ -143,9 +146,9 @@ def set_info(): def openai_server( openai_version, # noqa: F811 openai_clients, - wrap_openai_base_client_process_response, wrap_openai_api_requestor_request, wrap_openai_api_requestor_interpret_response, + wrap_httpx_client_send, ): """ This fixture will either create a mocked backend for testing purposes, or will @@ -165,9 +168,7 @@ def openai_server( yield # Run tests else: # Apply function wrappers to record data - wrap_function_wrapper( - "openai._base_client", "BaseClient._process_response", wrap_openai_base_client_process_response - ) + 
wrap_function_wrapper("httpx._client", "Client.send", wrap_httpx_client_send) yield # Run tests # Write responses to audit log with open(OPENAI_AUDIT_LOG_FILE, "w") as audit_log_fp: @@ -177,6 +178,43 @@ def openai_server( yield +def bind_send_params(request, *, stream=False, **kwargs): + return request + + +@pytest.fixture(scope="session") +def wrap_httpx_client_send(extract_shortened_prompt): # noqa: F811 + def _wrap_httpx_client_send(wrapped, instance, args, kwargs): + request = bind_send_params(*args, **kwargs) + if not request: + return wrapped(*args, **kwargs) + + params = json.loads(request.content.decode("utf-8")) + prompt = extract_shortened_prompt(params) + + # Send request + response = wrapped(*args, **kwargs) + + if response.status_code >= 400 or response.status_code < 200: + prompt = "error" + + rheaders = getattr(response, "headers") + + headers = dict( + filter( + lambda k: k[0].lower() in RECORDED_HEADERS + or k[0].lower().startswith("openai") + or k[0].lower().startswith("x-ratelimit"), + rheaders.items(), + ) + ) + body = json.loads(response.content.decode("utf-8")) + OPENAI_AUDIT_LOG_CONTENTS[prompt] = headers, response.status_code, body # Append response data to log + return response + + return _wrap_httpx_client_send + + @pytest.fixture(scope="session") def wrap_openai_api_requestor_interpret_response(): def _wrap_openai_api_requestor_interpret_response(wrapped, instance, args, kwargs): @@ -235,39 +273,3 @@ def bind_request_params(method, url, params=None, *args, **kwargs): def bind_request_interpret_response_params(result, stream): return result.content.decode("utf-8"), result.status_code, result.headers - - -def bind_base_client_process_response( - cast_to, - options, - response, - stream, - stream_cls, -): - return options, response - - -@pytest.fixture(scope="session") -def wrap_openai_base_client_process_response(extract_shortened_prompt): # noqa: F811 - def _wrap_openai_base_client_process_response(wrapped, instance, args, kwargs): - 
options, response = bind_base_client_process_response(*args, **kwargs) - if not options: - return wrapped(*args, **kwargs) - - data = getattr(options, "json_data", {}) - prompt = extract_shortened_prompt(data) - rheaders = getattr(response, "headers") - - headers = dict( - filter( - lambda k: k[0].lower() in RECORDED_HEADERS - or k[0].lower().startswith("openai") - or k[0].lower().startswith("x-ratelimit"), - rheaders.items(), - ) - ) - body = json.loads(response.content.decode("utf-8")) - OPENAI_AUDIT_LOG_CONTENTS[prompt] = headers, response.status_code, body # Append response data to audit log - return wrapped(*args, **kwargs) - - return _wrap_openai_base_client_process_response diff --git a/tests/mlmodel_openai/test_chat_completion.py b/tests/mlmodel_openai/test_chat_completion.py index 4e582f4638..f2c31b2628 100644 --- a/tests/mlmodel_openai/test_chat_completion.py +++ b/tests/mlmodel_openai/test_chat_completion.py @@ -63,6 +63,9 @@ "response.headers.ratelimitResetRequests": "7m12s", "response.headers.ratelimitRemainingTokens": 39940, "response.headers.ratelimitRemainingRequests": 199, + "response.headers.ratelimitLimitTokensUsageBased": "", + "response.headers.ratelimitResetTokensUsageBased": "", + "response.headers.ratelimitRemainingTokensUsageBased": "", "vendor": "openAI", "ingest_source": "Python", "response.number_of_messages": 3, @@ -179,6 +182,9 @@ def test_openai_chat_completion_sync_in_txn_with_convo_id(set_trace_info): "response.headers.ratelimitResetRequests": "7m12s", "response.headers.ratelimitRemainingTokens": 39940, "response.headers.ratelimitRemainingRequests": 199, + "response.headers.ratelimitLimitTokensUsageBased": "", + "response.headers.ratelimitResetTokensUsageBased": "", + "response.headers.ratelimitRemainingTokensUsageBased": "", "vendor": "openAI", "ingest_source": "Python", "response.number_of_messages": 3, diff --git a/tests/mlmodel_openai/test_chat_completion_error_v1.py b/tests/mlmodel_openai/test_chat_completion_error_v1.py new 
file mode 100644 index 0000000000..70dc58f998 --- /dev/null +++ b/tests/mlmodel_openai/test_chat_completion_error_v1.py @@ -0,0 +1,416 @@ +# Copyright 2010 New Relic, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import openai +import pytest +from testing_support.fixtures import ( + dt_enabled, + reset_core_stats_engine, + validate_custom_event_count, +) +from testing_support.validators.validate_custom_events import validate_custom_events +from testing_support.validators.validate_error_trace_attributes import ( + validate_error_trace_attributes, +) +from testing_support.validators.validate_span_events import validate_span_events +from testing_support.validators.validate_transaction_metrics import ( + validate_transaction_metrics, +) + +from newrelic.api.background_task import background_task +from newrelic.api.transaction import add_custom_attribute +from newrelic.common.object_names import callable_name + +_test_openai_chat_completion_messages = ( + {"role": "system", "content": "You are a scientist."}, + {"role": "user", "content": "What is 212 degrees Fahrenheit converted to Celsius?"}, +) + +expected_events_on_no_model_error = [ + ( + {"type": "LlmChatCompletionSummary"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (mlmodel_openai)", + "transaction_id": "transaction-id", + "conversation_id": "my-awesome-id", + "span_id": None, + "trace_id": "trace-id", + "api_key_last_four_digits": "sk-CRET", + "duration": None, # 
Response time varies each test run + "request.model": "", # No model in this test case + "response.organization": "", + "request.temperature": 0.7, + "request.max_tokens": 100, + "response.number_of_messages": 2, + "vendor": "openAI", + "ingest_source": "Python", + "error": True, + }, + ), + ( + {"type": "LlmChatCompletionMessage"}, + { + "id": None, + "appName": "Python Agent Test (mlmodel_openai)", + "conversation_id": "my-awesome-id", + "request_id": "", + "span_id": None, + "trace_id": "trace-id", + "transaction_id": "transaction-id", + "content": "You are a scientist.", + "role": "system", + "response.model": "", + "completion_id": None, + "sequence": 0, + "vendor": "openAI", + "ingest_source": "Python", + }, + ), + ( + {"type": "LlmChatCompletionMessage"}, + { + "id": None, + "appName": "Python Agent Test (mlmodel_openai)", + "conversation_id": "my-awesome-id", + "request_id": "", + "span_id": None, + "trace_id": "trace-id", + "transaction_id": "transaction-id", + "content": "What is 212 degrees Fahrenheit converted to Celsius?", + "role": "user", + "completion_id": None, + "response.model": "", + "sequence": 1, + "vendor": "openAI", + "ingest_source": "Python", + }, + ), +] + + +@dt_enabled +@reset_core_stats_engine() +@validate_error_trace_attributes( + callable_name(TypeError), + exact_attrs={ + "agent": {}, + "intrinsic": {}, + "user": {}, + }, +) +@validate_span_events( + exact_agents={ + "error.message": "Missing required arguments; Expected either ('messages' and 'model') or ('messages', 'model' and 'stream') arguments to be given", + } +) +@validate_transaction_metrics( + "test_chat_completion_error_v1:test_chat_completion_invalid_request_error_no_model", + scoped_metrics=[("Llm/completion/OpenAI/create", 1)], + rollup_metrics=[("Llm/completion/OpenAI/create", 1)], + background_task=True, +) +@validate_custom_events(expected_events_on_no_model_error) +@validate_custom_event_count(count=3) +@background_task() +def 
test_chat_completion_invalid_request_error_no_model(set_trace_info, sync_openai_client): + with pytest.raises(TypeError): + set_trace_info() + add_custom_attribute("conversation_id", "my-awesome-id") + sync_openai_client.chat.completions.create( + messages=_test_openai_chat_completion_messages, temperature=0.7, max_tokens=100 + ) + + +@dt_enabled +@reset_core_stats_engine() +@validate_error_trace_attributes( + callable_name(TypeError), + exact_attrs={ + "agent": {}, + "intrinsic": {}, + "user": {}, + }, +) +@validate_span_events( + exact_agents={ + "error.message": "Missing required arguments; Expected either ('messages' and 'model') or ('messages', 'model' and 'stream') arguments to be given", + } +) +@validate_transaction_metrics( + "test_chat_completion_error_v1:test_chat_completion_invalid_request_error_no_model_async", + scoped_metrics=[("Llm/completion/OpenAI/create", 1)], + rollup_metrics=[("Llm/completion/OpenAI/create", 1)], + background_task=True, +) +@validate_custom_events(expected_events_on_no_model_error) +@validate_custom_event_count(count=3) +@background_task() +def test_chat_completion_invalid_request_error_no_model_async(loop, set_trace_info, async_openai_client): + with pytest.raises(TypeError): + set_trace_info() + add_custom_attribute("conversation_id", "my-awesome-id") + loop.run_until_complete( + async_openai_client.chat.completions.create( + messages=_test_openai_chat_completion_messages, temperature=0.7, max_tokens=100 + ) + ) + + +expected_events_on_invalid_model_error = [ + ( + {"type": "LlmChatCompletionSummary"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (mlmodel_openai)", + "transaction_id": "transaction-id", + "conversation_id": "my-awesome-id", + "span_id": None, + "trace_id": "trace-id", + "api_key_last_four_digits": "sk-CRET", + "duration": None, # Response time varies each test run + "request.model": "does-not-exist", + "response.organization": "", + "request.temperature": 0.7, + 
"request.max_tokens": 100, + "response.number_of_messages": 1, + "vendor": "openAI", + "ingest_source": "Python", + "error": True, + }, + ), + ( + {"type": "LlmChatCompletionMessage"}, + { + "id": None, + "appName": "Python Agent Test (mlmodel_openai)", + "conversation_id": "my-awesome-id", + "request_id": "", + "span_id": None, + "trace_id": "trace-id", + "transaction_id": "transaction-id", + "content": "Model does not exist.", + "role": "user", + "response.model": "", + "completion_id": None, + "sequence": 0, + "vendor": "openAI", + "ingest_source": "Python", + }, + ), +] + + +@dt_enabled +@reset_core_stats_engine() +@validate_error_trace_attributes( + callable_name(openai.NotFoundError), + exact_attrs={ + "agent": {}, + "intrinsic": {}, + "user": { + "error.code": "model_not_found", + "http.statusCode": 404, + }, + }, +) +@validate_span_events( + exact_agents={ + "error.message": "The model `does-not-exist` does not exist", + } +) +@validate_transaction_metrics( + "test_chat_completion_error_v1:test_chat_completion_invalid_request_error_invalid_model", + scoped_metrics=[("Llm/completion/OpenAI/create", 1)], + rollup_metrics=[("Llm/completion/OpenAI/create", 1)], + background_task=True, +) +@validate_custom_events(expected_events_on_invalid_model_error) +@validate_custom_event_count(count=2) +@background_task() +def test_chat_completion_invalid_request_error_invalid_model(set_trace_info, sync_openai_client): + with pytest.raises(openai.NotFoundError): + set_trace_info() + add_custom_attribute("conversation_id", "my-awesome-id") + sync_openai_client.chat.completions.create( + model="does-not-exist", + messages=({"role": "user", "content": "Model does not exist."},), + temperature=0.7, + max_tokens=100, + ) + + +@dt_enabled +@reset_core_stats_engine() +@validate_error_trace_attributes( + callable_name(openai.NotFoundError), + exact_attrs={ + "agent": {}, + "intrinsic": {}, + "user": { + "error.code": "model_not_found", + "http.statusCode": 404, + }, + }, +) 
+@validate_span_events( + exact_agents={ + "error.message": "The model `does-not-exist` does not exist", + } +) +@validate_transaction_metrics( + "test_chat_completion_error_v1:test_chat_completion_invalid_request_error_invalid_model_async", + scoped_metrics=[("Llm/completion/OpenAI/create", 1)], + rollup_metrics=[("Llm/completion/OpenAI/create", 1)], + background_task=True, +) +@validate_custom_events(expected_events_on_invalid_model_error) +@validate_custom_event_count(count=2) +@background_task() +def test_chat_completion_invalid_request_error_invalid_model_async(loop, set_trace_info, async_openai_client): + with pytest.raises(openai.NotFoundError): + set_trace_info() + add_custom_attribute("conversation_id", "my-awesome-id") + loop.run_until_complete( + async_openai_client.chat.completions.create( + model="does-not-exist", + messages=({"role": "user", "content": "Model does not exist."},), + temperature=0.7, + max_tokens=100, + ) + ) + + +expected_events_on_wrong_api_key_error = [ + ( + {"type": "LlmChatCompletionSummary"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (mlmodel_openai)", + "transaction_id": "transaction-id", + "conversation_id": "", + "span_id": None, + "trace_id": "trace-id", + "api_key_last_four_digits": "sk-BEEF", + "duration": None, # Response time varies each test run + "request.model": "gpt-3.5-turbo", + "response.organization": "", + "request.temperature": 0.7, + "request.max_tokens": 100, + "response.number_of_messages": 1, + "vendor": "openAI", + "ingest_source": "Python", + "error": True, + }, + ), + ( + {"type": "LlmChatCompletionMessage"}, + { + "id": None, + "appName": "Python Agent Test (mlmodel_openai)", + "conversation_id": "", + "request_id": "", + "span_id": None, + "trace_id": "trace-id", + "transaction_id": "transaction-id", + "content": "Invalid API key.", + "role": "user", + "completion_id": None, + "response.model": "", + "sequence": 0, + "vendor": "openAI", + "ingest_source": 
"Python", + }, + ), +] + + +@dt_enabled +@reset_core_stats_engine() +@validate_error_trace_attributes( + callable_name(openai.AuthenticationError), + exact_attrs={ + "agent": {}, + "intrinsic": {}, + "user": { + "http.statusCode": 401, + "error.code": "invalid_api_key", + }, + }, +) +@validate_span_events( + exact_agents={ + "error.message": "Incorrect API key provided: DEADBEEF. You can find your API key at https://platform.openai.com/account/api-keys.", + } +) +@validate_transaction_metrics( + "test_chat_completion_error_v1:test_chat_completion_wrong_api_key_error", + scoped_metrics=[("Llm/completion/OpenAI/create", 1)], + rollup_metrics=[("Llm/completion/OpenAI/create", 1)], + background_task=True, +) +@validate_custom_events(expected_events_on_wrong_api_key_error) +@validate_custom_event_count(count=2) +@background_task() +def test_chat_completion_wrong_api_key_error(monkeypatch, set_trace_info, sync_openai_client): + with pytest.raises(openai.AuthenticationError): + set_trace_info() + monkeypatch.setattr(sync_openai_client, "api_key", "DEADBEEF") + sync_openai_client.chat.completions.create( + model="gpt-3.5-turbo", + messages=({"role": "user", "content": "Invalid API key."},), + temperature=0.7, + max_tokens=100, + ) + + +@dt_enabled +@reset_core_stats_engine() +@validate_error_trace_attributes( + callable_name(openai.AuthenticationError), + exact_attrs={ + "agent": {}, + "intrinsic": {}, + "user": { + "http.statusCode": 401, + "error.code": "invalid_api_key", + }, + }, +) +@validate_span_events( + exact_agents={ + "error.message": "Incorrect API key provided: DEADBEEF. 
You can find your API key at https://platform.openai.com/account/api-keys.", + } +) +@validate_transaction_metrics( + "test_chat_completion_error_v1:test_chat_completion_wrong_api_key_error_async", + scoped_metrics=[("Llm/completion/OpenAI/create", 1)], + rollup_metrics=[("Llm/completion/OpenAI/create", 1)], + background_task=True, +) +@validate_custom_events(expected_events_on_wrong_api_key_error) +@validate_custom_event_count(count=2) +@background_task() +def test_chat_completion_wrong_api_key_error_async(loop, monkeypatch, set_trace_info, async_openai_client): + with pytest.raises(openai.AuthenticationError): + set_trace_info() + monkeypatch.setattr(async_openai_client, "api_key", "DEADBEEF") + loop.run_until_complete( + async_openai_client.chat.completions.create( + model="gpt-3.5-turbo", + messages=({"role": "user", "content": "Invalid API key."},), + temperature=0.7, + max_tokens=100, + ) + ) diff --git a/tests/mlmodel_openai/test_chat_completion_v1.py b/tests/mlmodel_openai/test_chat_completion_v1.py index ee7b714893..4df977a6c2 100644 --- a/tests/mlmodel_openai/test_chat_completion_v1.py +++ b/tests/mlmodel_openai/test_chat_completion_v1.py @@ -12,21 +12,137 @@ # See the License for the specific language governing permissions and # limitations under the License. 
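Reviewer note: the `"sk-BEEF"` value asserted in the wrong-API-key tests above comes from the instrumentation recording only the last four characters of the monkeypatched `"DEADBEEF"` key, matching the `f"sk-{api_key[-4:]}"` expression earlier in the patch. A sketch (helper name hypothetical):

```python
def api_key_last_four_digits(api_key):
    # Report only an "sk-" prefix plus the key's last four characters;
    # empty string when no key is configured.
    return "sk-%s" % api_key[-4:] if api_key else ""


masked = api_key_last_four_digits("DEADBEEF")
```

This keeps the full key out of recorded events while still letting a user correlate events with a specific credential.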
-from testing_support.fixtures import ( # noqa: F401; pylint: disable=W0611 +import openai +from testing_support.fixtures import ( override_application_settings, reset_core_stats_engine, + validate_custom_event_count, +) +from testing_support.validators.validate_custom_events import validate_custom_events +from testing_support.validators.validate_transaction_metrics import ( + validate_transaction_metrics, ) from newrelic.api.background_task import background_task from newrelic.api.transaction import add_custom_attribute +disabled_custom_insights_settings = {"custom_insights_events.enabled": False} + _test_openai_chat_completion_messages = ( {"role": "system", "content": "You are a scientist."}, {"role": "user", "content": "What is 212 degrees Fahrenheit converted to Celsius?"}, ) +chat_completion_recorded_events = [ + ( + {"type": "LlmChatCompletionSummary"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (mlmodel_openai)", + "conversation_id": "my-awesome-id", + "transaction_id": "transaction-id", + "span_id": None, + "trace_id": "trace-id", + "request_id": "f8d0f53b6881c5c0a3698e55f8f410ac", + "api_key_last_four_digits": "sk-CRET", + "duration": None, # Response time varies each test run + "request.model": "gpt-3.5-turbo", + "response.model": "gpt-3.5-turbo-0613", + "response.organization": "new-relic-nkmd8b", + "response.usage.completion_tokens": 82, + "response.usage.total_tokens": 108, + "response.usage.prompt_tokens": 26, + "request.temperature": 0.7, + "request.max_tokens": 100, + "response.choices.finish_reason": "stop", + "response.api_type": "", + "response.headers.llmVersion": "2020-10-01", + "response.headers.ratelimitLimitRequests": 200, + "response.headers.ratelimitLimitTokens": 40000, + "response.headers.ratelimitResetTokens": "180ms", + "response.headers.ratelimitResetRequests": "11m32.334s", + "response.headers.ratelimitRemainingTokens": 39880, + "response.headers.ratelimitRemainingRequests": 198, + 
"response.headers.ratelimitLimitTokensUsageBased": 40000, + "response.headers.ratelimitResetTokensUsageBased": "180ms", + "response.headers.ratelimitRemainingTokensUsageBased": 39880, + "vendor": "openAI", + "ingest_source": "Python", + "response.number_of_messages": 3, + }, + ), + ( + {"type": "LlmChatCompletionMessage"}, + { + "id": "chatcmpl-8TJ9dS50zgQM7XicE8PLnCyEihRug-0", + "appName": "Python Agent Test (mlmodel_openai)", + "conversation_id": "my-awesome-id", + "request_id": "f8d0f53b6881c5c0a3698e55f8f410ac", + "span_id": None, + "trace_id": "trace-id", + "transaction_id": "transaction-id", + "content": "You are a scientist.", + "role": "system", + "completion_id": None, + "sequence": 0, + "response.model": "gpt-3.5-turbo-0613", + "vendor": "openAI", + "ingest_source": "Python", + }, + ), + ( + {"type": "LlmChatCompletionMessage"}, + { + "id": "chatcmpl-8TJ9dS50zgQM7XicE8PLnCyEihRug-1", + "appName": "Python Agent Test (mlmodel_openai)", + "conversation_id": "my-awesome-id", + "request_id": "f8d0f53b6881c5c0a3698e55f8f410ac", + "span_id": None, + "trace_id": "trace-id", + "transaction_id": "transaction-id", + "content": "What is 212 degrees Fahrenheit converted to Celsius?", + "role": "user", + "completion_id": None, + "sequence": 1, + "response.model": "gpt-3.5-turbo-0613", + "vendor": "openAI", + "ingest_source": "Python", + }, + ), + ( + {"type": "LlmChatCompletionMessage"}, + { + "id": "chatcmpl-8TJ9dS50zgQM7XicE8PLnCyEihRug-2", + "appName": "Python Agent Test (mlmodel_openai)", + "conversation_id": "my-awesome-id", + "request_id": "f8d0f53b6881c5c0a3698e55f8f410ac", + "span_id": None, + "trace_id": "trace-id", + "transaction_id": "transaction-id", + "content": "212 degrees Fahrenheit is equal to 100 degrees Celsius.", + "role": "assistant", + "completion_id": None, + "sequence": 2, + "response.model": "gpt-3.5-turbo-0613", + "vendor": "openAI", + "is_response": True, + "ingest_source": "Python", + }, + ), +] + @reset_core_stats_engine() 
+@validate_custom_events(chat_completion_recorded_events) +# One summary event, one system message, one user message, and one response message from the assistant +@validate_custom_event_count(count=4) +@validate_transaction_metrics( + name="test_chat_completion_v1:test_openai_chat_completion_sync_in_txn_with_convo_id", + custom_metrics=[ + ("Python/ML/OpenAI/%s" % openai.__version__, 1), + ], + background_task=True, +) @background_task() def test_openai_chat_completion_sync_in_txn_with_convo_id(set_trace_info, sync_openai_client): set_trace_info() @@ -34,3 +150,224 @@ def test_openai_chat_completion_sync_in_txn_with_convo_id(set_trace_info, sync_o sync_openai_client.chat.completions.create( model="gpt-3.5-turbo", messages=_test_openai_chat_completion_messages, temperature=0.7, max_tokens=100 ) + + +chat_completion_recorded_events_no_convo_id = [ + ( + {"type": "LlmChatCompletionSummary"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (mlmodel_openai)", + "conversation_id": "", + "transaction_id": "transaction-id", + "span_id": None, + "trace_id": "trace-id", + "request_id": "f8d0f53b6881c5c0a3698e55f8f410ac", + "api_key_last_four_digits": "sk-CRET", + "duration": None, # Response time varies each test run + "request.model": "gpt-3.5-turbo", + "response.model": "gpt-3.5-turbo-0613", + "response.organization": "new-relic-nkmd8b", + "response.usage.completion_tokens": 82, + "response.usage.total_tokens": 108, + "response.usage.prompt_tokens": 26, + "request.temperature": 0.7, + "request.max_tokens": 100, + "response.choices.finish_reason": "stop", + "response.api_type": "", + "response.headers.llmVersion": "2020-10-01", + "response.headers.ratelimitLimitRequests": 200, + "response.headers.ratelimitLimitTokens": 40000, + "response.headers.ratelimitResetTokens": "180ms", + "response.headers.ratelimitResetRequests": "11m32.334s", + "response.headers.ratelimitRemainingTokens": 39880, + "response.headers.ratelimitRemainingRequests": 
198, + "response.headers.ratelimitLimitTokensUsageBased": 40000, + "response.headers.ratelimitResetTokensUsageBased": "180ms", + "response.headers.ratelimitRemainingTokensUsageBased": 39880, + "vendor": "openAI", + "ingest_source": "Python", + "response.number_of_messages": 3, + }, + ), + ( + {"type": "LlmChatCompletionMessage"}, + { + "id": "chatcmpl-8TJ9dS50zgQM7XicE8PLnCyEihRug-0", + "appName": "Python Agent Test (mlmodel_openai)", + "conversation_id": "", + "request_id": "f8d0f53b6881c5c0a3698e55f8f410ac", + "span_id": None, + "trace_id": "trace-id", + "transaction_id": "transaction-id", + "content": "You are a scientist.", + "role": "system", + "completion_id": None, + "sequence": 0, + "response.model": "gpt-3.5-turbo-0613", + "vendor": "openAI", + "ingest_source": "Python", + }, + ), + ( + {"type": "LlmChatCompletionMessage"}, + { + "id": "chatcmpl-8TJ9dS50zgQM7XicE8PLnCyEihRug-1", + "appName": "Python Agent Test (mlmodel_openai)", + "conversation_id": "", + "request_id": "f8d0f53b6881c5c0a3698e55f8f410ac", + "span_id": None, + "trace_id": "trace-id", + "transaction_id": "transaction-id", + "content": "What is 212 degrees Fahrenheit converted to Celsius?", + "role": "user", + "completion_id": None, + "sequence": 1, + "response.model": "gpt-3.5-turbo-0613", + "vendor": "openAI", + "ingest_source": "Python", + }, + ), + ( + {"type": "LlmChatCompletionMessage"}, + { + "id": "chatcmpl-8TJ9dS50zgQM7XicE8PLnCyEihRug-2", + "appName": "Python Agent Test (mlmodel_openai)", + "conversation_id": "", + "request_id": "f8d0f53b6881c5c0a3698e55f8f410ac", + "span_id": None, + "trace_id": "trace-id", + "transaction_id": "transaction-id", + "content": "212 degrees Fahrenheit is equal to 100 degrees Celsius.", + "role": "assistant", + "completion_id": None, + "sequence": 2, + "response.model": "gpt-3.5-turbo-0613", + "vendor": "openAI", + "is_response": True, + "ingest_source": "Python", + }, + ), +] + + +@reset_core_stats_engine() 
+@validate_custom_events(chat_completion_recorded_events_no_convo_id) +# One summary event, one system message, one user message, and one response message from the assistant +@validate_custom_event_count(count=4) +@validate_transaction_metrics( + "test_chat_completion_v1:test_openai_chat_completion_sync_in_txn_no_convo_id", + scoped_metrics=[("Llm/completion/OpenAI/create", 1)], + rollup_metrics=[("Llm/completion/OpenAI/create", 1)], + background_task=True, +) +@background_task() +def test_openai_chat_completion_sync_in_txn_no_convo_id(set_trace_info, sync_openai_client): + set_trace_info() + sync_openai_client.chat.completions.create( + model="gpt-3.5-turbo", messages=_test_openai_chat_completion_messages, temperature=0.7, max_tokens=100 + ) + + +@reset_core_stats_engine() +@validate_custom_event_count(count=0) +def test_openai_chat_completion_sync_outside_txn(sync_openai_client): + add_custom_attribute("conversation_id", "my-awesome-id") + sync_openai_client.chat.completions.create( + model="gpt-3.5-turbo", messages=_test_openai_chat_completion_messages, temperature=0.7, max_tokens=100 + ) + + +@override_application_settings(disabled_custom_insights_settings) +@reset_core_stats_engine() +@validate_custom_event_count(count=0) +@validate_transaction_metrics( + name="test_chat_completion_v1:test_openai_chat_completion_sync_custom_events_insights_disabled", + custom_metrics=[ + ("Python/ML/OpenAI/%s" % openai.__version__, 1), + ], + background_task=True, +) +@background_task() +def test_openai_chat_completion_sync_custom_events_insights_disabled(set_trace_info, sync_openai_client): + set_trace_info() + sync_openai_client.chat.completions.create( + model="gpt-3.5-turbo", messages=_test_openai_chat_completion_messages, temperature=0.7, max_tokens=100 + ) + + +@reset_core_stats_engine() +@validate_custom_events(chat_completion_recorded_events_no_convo_id) +@validate_custom_event_count(count=4) +@validate_transaction_metrics( + 
"test_chat_completion_v1:test_openai_chat_completion_async_conversation_id_unset", + scoped_metrics=[("Llm/completion/OpenAI/create", 1)], + rollup_metrics=[("Llm/completion/OpenAI/create", 1)], + background_task=True, +) +@background_task() +def test_openai_chat_completion_async_conversation_id_unset(loop, set_trace_info, async_openai_client): + set_trace_info() + + loop.run_until_complete( + async_openai_client.chat.completions.create( + model="gpt-3.5-turbo", messages=_test_openai_chat_completion_messages, temperature=0.7, max_tokens=100 + ) + ) + + +@reset_core_stats_engine() +@validate_custom_events(chat_completion_recorded_events) +@validate_custom_event_count(count=4) +@validate_transaction_metrics( + "test_chat_completion_v1:test_openai_chat_completion_async_conversation_id_set", + scoped_metrics=[("Llm/completion/OpenAI/create", 1)], + rollup_metrics=[("Llm/completion/OpenAI/create", 1)], + background_task=True, +) +@validate_transaction_metrics( + name="test_chat_completion_v1:test_openai_chat_completion_async_conversation_id_set", + custom_metrics=[ + ("Python/ML/OpenAI/%s" % openai.__version__, 1), + ], + background_task=True, +) +@background_task() +def test_openai_chat_completion_async_conversation_id_set(loop, set_trace_info, async_openai_client): + set_trace_info() + add_custom_attribute("conversation_id", "my-awesome-id") + + loop.run_until_complete( + async_openai_client.chat.completions.create( + model="gpt-3.5-turbo", messages=_test_openai_chat_completion_messages, temperature=0.7, max_tokens=100 + ) + ) + + +@reset_core_stats_engine() +@validate_custom_event_count(count=0) +def test_openai_chat_completion_async_outside_transaction(loop, async_openai_client): + loop.run_until_complete( + async_openai_client.chat.completions.create( + model="gpt-3.5-turbo", messages=_test_openai_chat_completion_messages, temperature=0.7, max_tokens=100 + ) + ) + + +@override_application_settings(disabled_custom_insights_settings) +@reset_core_stats_engine() 
+@validate_custom_event_count(count=0) +@validate_transaction_metrics( + name="test_chat_completion_v1:test_openai_chat_completion_async_disabled_custom_event_settings", + custom_metrics=[ + ("Python/ML/OpenAI/%s" % openai.__version__, 1), + ], + background_task=True, +) +@background_task() +def test_openai_chat_completion_async_disabled_custom_event_settings(loop, async_openai_client): + loop.run_until_complete( + async_openai_client.chat.completions.create( + model="gpt-3.5-turbo", messages=_test_openai_chat_completion_messages, temperature=0.7, max_tokens=100 + ) + ) diff --git a/tests/mlmodel_openai/test_get_llm_message_ids_v1.py b/tests/mlmodel_openai/test_get_llm_message_ids_v1.py new file mode 100644 index 0000000000..f85a26c2a9 --- /dev/null +++ b/tests/mlmodel_openai/test_get_llm_message_ids_v1.py @@ -0,0 +1,234 @@ +# Copyright 2010 New Relic, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
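The new `test_get_llm_message_ids_v1.py` module below exercises `get_llm_message_ids`, which maps an OpenAI response id back to the per-message id records the agent stored on the transaction. As a rough, self-contained sketch of the id layout those tests assert against (the helper name here is hypothetical; the real lookup lives in `newrelic.api.ml_model`):

```python
def build_message_ids(response_id, request_id, conversation_id, message_count):
    # One id record per chat message (system, user, assistant), each keyed
    # by the response id plus the message's index in the conversation.
    return [
        {
            "conversation_id": conversation_id,
            "request_id": request_id,
            "message_id": "%s-%d" % (response_id, index),
        }
        for index in range(message_count)
    ]


ids = build_message_ids(
    "chatcmpl-8TJ9dS50zgQM7XicE8PLnCyEihRug",
    "f8d0f53b6881c5c0a3698e55f8f410ac",
    "my-awesome-id",
    3,
)
```

Each dict carries enough context to tie a later feedback event back to one specific message, which is exactly what the `record_llm_feedback_event` calls in the tests below rely on.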
+from testing_support.fixtures import ( + reset_core_stats_engine, + validate_custom_event_count, +) + +from newrelic.api.background_task import background_task +from newrelic.api.ml_model import get_llm_message_ids, record_llm_feedback_event +from newrelic.api.transaction import add_custom_attribute, current_transaction + +_test_openai_chat_completion_messages_1 = ( + {"role": "system", "content": "You are a scientist."}, + {"role": "user", "content": "What is 212 degrees Fahrenheit converted to Celsius?"}, +) +_test_openai_chat_completion_messages_2 = ( + {"role": "system", "content": "You are a mathematician."}, + {"role": "user", "content": "What is 1 plus 2?"}, +) +expected_message_ids_1 = [ + { + "conversation_id": "my-awesome-id", + "request_id": "f8d0f53b6881c5c0a3698e55f8f410ac", + "message_id": "chatcmpl-8TJ9dS50zgQM7XicE8PLnCyEihRug-0", + }, + { + "conversation_id": "my-awesome-id", + "request_id": "f8d0f53b6881c5c0a3698e55f8f410ac", + "message_id": "chatcmpl-8TJ9dS50zgQM7XicE8PLnCyEihRug-1", + }, + { + "conversation_id": "my-awesome-id", + "request_id": "f8d0f53b6881c5c0a3698e55f8f410ac", + "message_id": "chatcmpl-8TJ9dS50zgQM7XicE8PLnCyEihRug-2", + }, +] + +expected_message_ids_1_no_conversation_id = [ + { + "conversation_id": "", + "request_id": "f8d0f53b6881c5c0a3698e55f8f410ac", + "message_id": "chatcmpl-8TJ9dS50zgQM7XicE8PLnCyEihRug-0", + }, + { + "conversation_id": "", + "request_id": "f8d0f53b6881c5c0a3698e55f8f410ac", + "message_id": "chatcmpl-8TJ9dS50zgQM7XicE8PLnCyEihRug-1", + }, + { + "conversation_id": "", + "request_id": "f8d0f53b6881c5c0a3698e55f8f410ac", + "message_id": "chatcmpl-8TJ9dS50zgQM7XicE8PLnCyEihRug-2", + }, +] +expected_message_ids_2 = [ + { + "conversation_id": "my-awesome-id", + "request_id": "f8d0f53b6881c5c0a3698e55f8f410cd", + "message_id": "chatcmpl-87sb95K4EF2nuJRcTs43Tm9ntTeat-0", + }, + { + "conversation_id": "my-awesome-id", + "request_id": "f8d0f53b6881c5c0a3698e55f8f410cd", + "message_id": 
"chatcmpl-87sb95K4EF2nuJRcTs43Tm9ntTeat-1",
+    },
+    {
+        "conversation_id": "my-awesome-id",
+        "request_id": "f8d0f53b6881c5c0a3698e55f8f410cd",
+        "message_id": "chatcmpl-87sb95K4EF2nuJRcTs43Tm9ntTeat-2",
+    },
+]
+expected_message_ids_2_no_conversation_id = [
+    {
+        "conversation_id": "",
+        "request_id": "f8d0f53b6881c5c0a3698e55f8f410cd",
+        "message_id": "chatcmpl-87sb95K4EF2nuJRcTs43Tm9ntTeat-0",
+    },
+    {
+        "conversation_id": "",
+        "request_id": "f8d0f53b6881c5c0a3698e55f8f410cd",
+        "message_id": "chatcmpl-87sb95K4EF2nuJRcTs43Tm9ntTeat-1",
+    },
+    {
+        "conversation_id": "",
+        "request_id": "f8d0f53b6881c5c0a3698e55f8f410cd",
+        "message_id": "chatcmpl-87sb95K4EF2nuJRcTs43Tm9ntTeat-2",
+    },
+]
+
+
+@reset_core_stats_engine()
+@background_task()
+def test_get_llm_message_ids_when_nr_message_ids_not_set():
+    message_ids = get_llm_message_ids("request-id-1")
+    assert message_ids == []
+
+
+@reset_core_stats_engine()
+def test_get_llm_message_ids_outside_transaction():
+    message_ids = get_llm_message_ids("request-id-1")
+    assert message_ids == []
+
+
+@reset_core_stats_engine()
+@background_task()
+def test_get_llm_message_ids_multiple_async(loop, set_trace_info, async_openai_client):
+    set_trace_info()
+    add_custom_attribute("conversation_id", "my-awesome-id")
+
+    async def _run():
+        res1 = await async_openai_client.chat.completions.create(
+            model="gpt-3.5-turbo", messages=_test_openai_chat_completion_messages_1, temperature=0.7, max_tokens=100
+        )
+        res2 = await async_openai_client.chat.completions.create(
+            model="gpt-3.5-turbo", messages=_test_openai_chat_completion_messages_2, temperature=0.7, max_tokens=100
+        )
+        return [res1, res2]
+
+    results = loop.run_until_complete(_run())
+
+    message_ids = [m for m in get_llm_message_ids(results[0].id)]
+    assert message_ids == expected_message_ids_1
+
+    message_ids = [m for m in get_llm_message_ids(results[1].id)]
+    assert message_ids == expected_message_ids_2
+
+    # Make sure we aren't causing a memory leak.
+    transaction = current_transaction()
+    assert not transaction._nr_message_ids
+
+
+@reset_core_stats_engine()
+@background_task()
+def test_get_llm_message_ids_multiple_async_no_conversation_id(loop, set_trace_info, async_openai_client):
+    set_trace_info()
+
+    async def _run():
+        res1 = await async_openai_client.chat.completions.create(
+            model="gpt-3.5-turbo", messages=_test_openai_chat_completion_messages_1, temperature=0.7, max_tokens=100
+        )
+        res2 = await async_openai_client.chat.completions.create(
+            model="gpt-3.5-turbo", messages=_test_openai_chat_completion_messages_2, temperature=0.7, max_tokens=100
+        )
+        return [res1, res2]
+
+    results = loop.run_until_complete(_run())
+
+    message_ids = [m for m in get_llm_message_ids(results[0].id)]
+    assert message_ids == expected_message_ids_1_no_conversation_id
+
+    message_ids = [m for m in get_llm_message_ids(results[1].id)]
+    assert message_ids == expected_message_ids_2_no_conversation_id
+
+    # Make sure we aren't causing a memory leak.
+    transaction = current_transaction()
+    assert not transaction._nr_message_ids
+
+
+@reset_core_stats_engine()
+# Three chat completion messages and one chat completion summary for each create call (8 in total)
+# Three feedback events for the first create call
+@validate_custom_event_count(11)
+@background_task()
+def test_get_llm_message_ids_multiple_sync(set_trace_info, sync_openai_client):
+    set_trace_info()
+    add_custom_attribute("conversation_id", "my-awesome-id")
+
+    results = sync_openai_client.chat.completions.create(
+        model="gpt-3.5-turbo", messages=_test_openai_chat_completion_messages_1, temperature=0.7, max_tokens=100
+    )
+    message_ids = [m for m in get_llm_message_ids(results.id)]
+    assert message_ids == expected_message_ids_1
+
+    for message_id in message_ids:
+        record_llm_feedback_event(
+            category="informative",
+            rating=1,
+            message_id=message_id.get("message_id"),
+            request_id=message_id.get("request_id"),
+            conversation_id=message_id.get("conversation_id"),
+        )
+
+    results = sync_openai_client.chat.completions.create(
+        model="gpt-3.5-turbo", messages=_test_openai_chat_completion_messages_2, temperature=0.7, max_tokens=100
+    )
+    message_ids = [m for m in get_llm_message_ids(results.id)]
+    assert message_ids == expected_message_ids_2
+
+    # Make sure we aren't causing a memory leak.
+    transaction = current_transaction()
+    assert not transaction._nr_message_ids
+
+
+@reset_core_stats_engine()
+@validate_custom_event_count(11)
+@background_task()
+def test_get_llm_message_ids_multiple_sync_no_conversation_id(set_trace_info, sync_openai_client):
+    set_trace_info()
+
+    results = sync_openai_client.chat.completions.create(
+        model="gpt-3.5-turbo", messages=_test_openai_chat_completion_messages_1, temperature=0.7, max_tokens=100
+    )
+    message_ids = [m for m in get_llm_message_ids(results.id)]
+    assert message_ids == expected_message_ids_1_no_conversation_id
+
+    for message_id in message_ids:
+        record_llm_feedback_event(
+            category="informative",
+            rating=1,
+            message_id=message_id.get("message_id"),
+            request_id=message_id.get("request_id"),
+            conversation_id=message_id.get("conversation_id"),
+        )
+
+    results = sync_openai_client.chat.completions.create(
+        model="gpt-3.5-turbo", messages=_test_openai_chat_completion_messages_2, temperature=0.7, max_tokens=100
+    )
+    message_ids = [m for m in get_llm_message_ids(results.id)]
+    assert message_ids == expected_message_ids_2_no_conversation_id
+
+    # Make sure we aren't causing a memory leak.
+ transaction = current_transaction() + assert not transaction._nr_message_ids diff --git a/tox.ini b/tox.ini index 53020ce3ab..3ba0daf7a2 100644 --- a/tox.ini +++ b/tox.ini @@ -140,7 +140,7 @@ envlist = python-framework_starlette-{py37,py38,py39,py310,py311,pypy38}-starlettelatest, python-framework_strawberry-{py37,py38,py39,py310,py311}-strawberrylatest, python-mlmodel_openai-openai0-{py37,py38,py39,py310,py311,pypy38}, - python-mlmodel_openai-openai1-{py37,py38,py39,py310,py311,pypy38}, + python-mlmodel_openai-openailatest-{py37,py38,py39,py310,py311,pypy38}, python-logger_logging-{py27,py37,py38,py39,py310,py311,pypy27,pypy38}, python-logger_loguru-{py37,py38,py39,py310,py311,pypy38}-logurulatest, python-logger_loguru-py39-loguru{06,05}, @@ -343,7 +343,7 @@ deps = framework_tornado-tornadolatest: tornado framework_tornado-tornadomaster: https://github.com/tornadoweb/tornado/archive/master.zip mlmodel_openai-openai0: openai[datalib]<1.0 - mlmodel_openai-openai1: openai[datalib]<2.0 + mlmodel_openai-openailatest: openai[datalib] mlmodel_openai: protobuf logger_loguru-logurulatest: loguru logger_loguru-loguru06: loguru<0.7 @@ -495,4 +495,4 @@ source = directory = ${TOX_ENV_DIR-.}/htmlcov [coverage:xml] -output = ${TOX_ENV_DIR-.}/coverage.xml \ No newline at end of file +output = ${TOX_ENV_DIR-.}/coverage.xml From 13c3418fa804177e4cd0c06b541b1c6f2d7532db Mon Sep 17 00:00:00 2001 From: Uma Annamalai Date: Fri, 15 Dec 2023 16:10:38 -0800 Subject: [PATCH 015/199] OpenAI v1 embeddings errors (#1005) * Add embeddings OpenAI v1 support. * Fix errors tests. * Add embeddings OpenAI v1 support. * Fix errors tests. * Add updated tests for compatiblity with new mock server. * Update tox. * Restore chat completion error tests. * Address review comments. * Store converted response object in new var for v1. * Add errors testing. * Async tests. * Fix exc parsing. * Fix auth test. * Fix error message for Python 3.10 +. * Add embeddings OpenAI v1 support. * Fix errors tests. 
* Add updated tests for compatiblity with new mock server. * Add embeddings OpenAI v1 support. * Fix errors tests. * Update tox. * Restore chat completion error tests. * Address review comments. * Store converted response object in new var for v1. * Add errors testing. * Async tests. * Fix exc parsing. * Fix auth test. * Fix error message for Python 3.10 +. * Merge conflicts. * Merge conflicts. * status code. * Update mock server. * [Mega-Linter] Apply linters fixes * Trigger tests * Remove embedding error v1 tests from ignore list * Fix invalid request tests. * [Mega-Linter] Apply linters fixes * Trigger tests --------- Co-authored-by: umaannamalai Co-authored-by: Hannah Stepanek --- newrelic/hooks/mlmodel_openai.py | 76 +++-- tests/mlmodel_openai/conftest.py | 1 - .../test_embeddings_error_v1.py | 298 +++++++++++++++++- tox.ini | 2 +- 4 files changed, 353 insertions(+), 24 deletions(-) diff --git a/newrelic/hooks/mlmodel_openai.py b/newrelic/hooks/mlmodel_openai.py index a653b7ca69..7ea277e766 100644 --- a/newrelic/hooks/mlmodel_openai.py +++ b/newrelic/hooks/mlmodel_openai.py @@ -59,19 +59,37 @@ def wrap_embedding_sync(wrapped, instance, args, kwargs): try: response = wrapped(*args, **kwargs) except Exception as exc: - notice_error_attributes = { - "http.statusCode": getattr(exc, "http_status", ""), - "error.message": getattr(exc, "_message", ""), - "error.code": getattr(getattr(exc, "error", ""), "code", ""), - "error.param": getattr(exc, "param", ""), - "embedding_id": embedding_id, - } - exc._nr_message = notice_error_attributes.pop("error.message") + if OPENAI_V1: + response = getattr(exc, "response", "") + response_headers = getattr(response, "headers", "") + exc_organization = response_headers.get("openai-organization", "") if response_headers else "" + # There appears to be a bug here in openai v1 where despite having code, + # param, etc in the error response, they are not populated on the exception + # object so grab them from the response body object 
instead. + body = getattr(exc, "body", {}) or {} + notice_error_attributes = { + "http.statusCode": getattr(exc, "status_code", "") or "", + "error.message": body.get("message", "") or "", + "error.code": body.get("code", "") or "", + "error.param": body.get("param", "") or "", + "embedding_id": embedding_id, + } + else: + exc_organization = getattr(exc, "organization", "") + notice_error_attributes = { + "http.statusCode": getattr(exc, "http_status", ""), + "error.message": getattr(exc, "_message", ""), + "error.code": getattr(getattr(exc, "error", ""), "code", ""), + "error.param": getattr(exc, "param", ""), + "embedding_id": embedding_id, + } + message = notice_error_attributes.pop("error.message") + if message: + exc._nr_message = message ft.notice_error( attributes=notice_error_attributes, ) - # Gather attributes to add to embedding summary event in error context - exc_organization = getattr(exc, "organization", "") + error_embedding_dict = { "id": embedding_id, "appName": settings.app_name, @@ -498,19 +516,37 @@ async def wrap_embedding_async(wrapped, instance, args, kwargs): try: response = await wrapped(*args, **kwargs) except Exception as exc: - notice_error_attributes = { - "http.statusCode": getattr(exc, "http_status", ""), - "error.message": getattr(exc, "_message", ""), - "error.code": getattr(getattr(exc, "error", ""), "code", ""), - "error.param": getattr(exc, "param", ""), - "embedding_id": embedding_id, - } - exc._nr_message = notice_error_attributes.pop("error.message") + if OPENAI_V1: + response = getattr(exc, "response", "") + response_headers = getattr(response, "headers", "") + exc_organization = response_headers.get("openai-organization", "") if response_headers else "" + # There appears to be a bug here in openai v1 where despite having code, + # param, etc in the error response, they are not populated on the exception + # object so grab them from the response body object instead. 
+ body = getattr(exc, "body", {}) or {} + notice_error_attributes = { + "http.statusCode": getattr(exc, "status_code", "") or "", + "error.message": body.get("message", "") or "", + "error.code": body.get("code", "") or "", + "error.param": body.get("param", "") or "", + "embedding_id": embedding_id, + } + else: + exc_organization = getattr(exc, "organization", "") + notice_error_attributes = { + "http.statusCode": getattr(exc, "http_status", ""), + "error.message": getattr(exc, "_message", ""), + "error.code": getattr(getattr(exc, "error", ""), "code", ""), + "error.param": getattr(exc, "param", ""), + "embedding_id": embedding_id, + } + message = notice_error_attributes.pop("error.message") + if message: + exc._nr_message = message ft.notice_error( attributes=notice_error_attributes, ) - # Gather attributes to add to embedding summary event in error context - exc_organization = getattr(exc, "organization", "") + error_embedding_dict = { "id": embedding_id, "appName": settings.app_name, diff --git a/tests/mlmodel_openai/conftest.py b/tests/mlmodel_openai/conftest.py index 6c0fed0e44..180bec9cc4 100644 --- a/tests/mlmodel_openai/conftest.py +++ b/tests/mlmodel_openai/conftest.py @@ -65,7 +65,6 @@ "test_chat_completion.py", "test_get_llm_message_ids.py", "test_chat_completion_error.py", - "test_embeddings_error_v1.py", ] diff --git a/tests/mlmodel_openai/test_embeddings_error_v1.py b/tests/mlmodel_openai/test_embeddings_error_v1.py index 485723f041..d5cfaf3457 100644 --- a/tests/mlmodel_openai/test_embeddings_error_v1.py +++ b/tests/mlmodel_openai/test_embeddings_error_v1.py @@ -12,17 +12,311 @@ # See the License for the specific language governing permissions and # limitations under the License. 
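The hook changes above pull error metadata from different places depending on the SDK major version: openai v1 exposes `status_code` and a parsed `body` dict on the exception, while v0 used `http_status` and a nested `error` object. A minimal sketch of the v1 branch, using a stand-in exception object (the real code runs inside the agent's wrappers, and the fake class below is illustrative only):

```python
def extract_v1_error_attributes(exc, embedding_id):
    # openai v1 exceptions don't reliably populate code/param as attributes,
    # so read them from the parsed response body instead; every value falls
    # back to an empty string so the event attributes are always present.
    body = getattr(exc, "body", {}) or {}
    return {
        "http.statusCode": getattr(exc, "status_code", "") or "",
        "error.message": body.get("message", "") or "",
        "error.code": body.get("code", "") or "",
        "error.param": body.get("param", "") or "",
        "embedding_id": embedding_id,
    }


class _FakeNotFoundError:  # stand-in for an openai.NotFoundError-like object
    status_code = 404
    body = {
        "message": "The model `does-not-exist` does not exist",
        "code": "model_not_found",
        "param": None,
    }


attrs = extract_v1_error_attributes(_FakeNotFoundError(), "embedding-id")
```

Note the `or ""` fallbacks: a body `param` of `None` collapses to an empty string, which is why the error tests below can assert exact attribute dictionaries.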
+import sys + import openai import pytest -from newrelic.api.background_task import background_task +from testing_support.fixtures import ( + dt_enabled, + reset_core_stats_engine, + validate_custom_event_count, +) +from testing_support.validators.validate_custom_events import validate_custom_events +from testing_support.validators.validate_error_trace_attributes import ( + validate_error_trace_attributes, +) +from testing_support.validators.validate_span_events import validate_span_events +from testing_support.validators.validate_transaction_metrics import ( + validate_transaction_metrics, +) +from newrelic.api.background_task import background_task +from newrelic.common.object_names import callable_name # Sync tests: +no_model_events = [ + ( + {"type": "LlmEmbedding"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (mlmodel_openai)", + "transaction_id": "transaction-id", + "span_id": None, + "trace_id": "trace-id", + "input": "This is an embedding test with no model.", + "api_key_last_four_digits": "sk-CRET", + "duration": None, # Response time varies each test run + "request.model": "", # No model in this test case + "response.organization": "", + "vendor": "openAI", + "ingest_source": "Python", + "error": True, + }, + ), +] + + +@dt_enabled +@reset_core_stats_engine() +@validate_error_trace_attributes( + callable_name(TypeError), + exact_attrs={ + "agent": {}, + "intrinsic": {}, + "user": {}, + }, +) +@validate_span_events( + exact_agents={ + "error.message": "create() missing 1 required keyword-only argument: 'model'" + if sys.version_info < (3, 10) + else "Embeddings.create() missing 1 required keyword-only argument: 'model'", + } +) +@validate_transaction_metrics( + name="test_embeddings_error_v1:test_embeddings_invalid_request_error_no_model", + scoped_metrics=[("Llm/embedding/OpenAI/create", 1)], + rollup_metrics=[("Llm/embedding/OpenAI/create", 1)], + custom_metrics=[ + ("Python/ML/OpenAI/%s" % openai.__version__, 1), + 
], + background_task=True, +) +@validate_custom_events(no_model_events) +@validate_custom_event_count(count=1) +@background_task() +def test_embeddings_invalid_request_error_no_model(set_trace_info, sync_openai_client): + with pytest.raises(TypeError): + set_trace_info() + sync_openai_client.embeddings.create(input="This is an embedding test with no model.") # no model provided + + +@dt_enabled +@reset_core_stats_engine() +@validate_error_trace_attributes( + callable_name(TypeError), + exact_attrs={ + "agent": {}, + "intrinsic": {}, + "user": {}, + }, +) +@validate_span_events( + exact_agents={ + "error.message": "create() missing 1 required keyword-only argument: 'model'" + if sys.version_info < (3, 10) + else "AsyncEmbeddings.create() missing 1 required keyword-only argument: 'model'", + } +) +@validate_transaction_metrics( + name="test_embeddings_error_v1:test_embeddings_invalid_request_error_no_model_async", + scoped_metrics=[("Llm/embedding/OpenAI/create", 1)], + rollup_metrics=[("Llm/embedding/OpenAI/create", 1)], + custom_metrics=[ + ("Python/ML/OpenAI/%s" % openai.__version__, 1), + ], + background_task=True, +) +@validate_custom_events(no_model_events) +@validate_custom_event_count(count=1) +@background_task() +def test_embeddings_invalid_request_error_no_model_async(set_trace_info, async_openai_client, loop): + with pytest.raises(TypeError): + set_trace_info() + loop.run_until_complete( + async_openai_client.embeddings.create(input="This is an embedding test with no model.") + ) # no model provided + + +invalid_model_events = [ + ( + {"type": "LlmEmbedding"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (mlmodel_openai)", + "transaction_id": "transaction-id", + "span_id": None, + "trace_id": "trace-id", + "input": "Model does not exist.", + "api_key_last_four_digits": "sk-CRET", + "duration": None, # Response time varies each test run + "request.model": "does-not-exist", + "response.organization": None, + "vendor": 
"openAI", + "ingest_source": "Python", + "error": True, + }, + ), +] + + +@dt_enabled +@reset_core_stats_engine() +@validate_error_trace_attributes( + callable_name(openai.NotFoundError), + exact_attrs={ + "agent": {}, + "intrinsic": {}, + "user": { + "http.statusCode": 404, + "error.code": "model_not_found", + }, + }, +) +@validate_span_events( + exact_agents={ + "error.message": "The model `does-not-exist` does not exist", + } +) +@validate_transaction_metrics( + name="test_embeddings_error_v1:test_embeddings_invalid_request_error_invalid_model", + scoped_metrics=[("Llm/embedding/OpenAI/create", 1)], + rollup_metrics=[("Llm/embedding/OpenAI/create", 1)], + custom_metrics=[ + ("Python/ML/OpenAI/%s" % openai.__version__, 1), + ], + background_task=True, +) +@validate_custom_events(invalid_model_events) +@validate_custom_event_count(count=1) @background_task() def test_embeddings_invalid_request_error_invalid_model(set_trace_info, sync_openai_client): - with pytest.raises(openai.InternalServerError): + with pytest.raises(openai.NotFoundError): set_trace_info() sync_openai_client.embeddings.create(input="Model does not exist.", model="does-not-exist") +@dt_enabled +@reset_core_stats_engine() +@validate_error_trace_attributes( + callable_name(openai.NotFoundError), + exact_attrs={ + "agent": {}, + "intrinsic": {}, + "user": { + "http.statusCode": 404, + "error.code": "model_not_found", + }, + }, +) +@validate_span_events( + exact_agents={ + "error.message": "The model `does-not-exist` does not exist", + } +) +@validate_transaction_metrics( + name="test_embeddings_error_v1:test_embeddings_invalid_request_error_invalid_model_async", + scoped_metrics=[("Llm/embedding/OpenAI/create", 1)], + rollup_metrics=[("Llm/embedding/OpenAI/create", 1)], + custom_metrics=[ + ("Python/ML/OpenAI/%s" % openai.__version__, 1), + ], + background_task=True, +) +@validate_custom_events(invalid_model_events) +@validate_custom_event_count(count=1) +@background_task() +def 
test_embeddings_invalid_request_error_invalid_model_async(set_trace_info, async_openai_client, loop): + with pytest.raises(openai.NotFoundError): + set_trace_info() + loop.run_until_complete( + async_openai_client.embeddings.create(input="Model does not exist.", model="does-not-exist") + ) + + +embedding_invalid_key_error_events = [ + ( + {"type": "LlmEmbedding"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (mlmodel_openai)", + "transaction_id": "transaction-id", + "span_id": None, + "trace_id": "trace-id", + "input": "Invalid API key.", + "api_key_last_four_digits": "sk-BEEF", + "duration": None, # Response time varies each test run + "request.model": "text-embedding-ada-002", + "response.organization": None, + "vendor": "openAI", + "ingest_source": "Python", + "error": True, + }, + ), +] + +@dt_enabled +@reset_core_stats_engine() +@validate_error_trace_attributes( + callable_name(openai.AuthenticationError), + exact_attrs={ + "agent": {}, + "intrinsic": {}, + "user": { + "http.statusCode": 401, + "error.code": "invalid_api_key", + }, + }, +) +@validate_span_events( + exact_agents={ + "error.message": "Incorrect API key provided: DEADBEEF. 
You can find your API key at https://platform.openai.com/account/api-keys.", + } +) +@validate_transaction_metrics( + name="test_embeddings_error_v1:test_embeddings_wrong_api_key_error", + scoped_metrics=[("Llm/embedding/OpenAI/create", 1)], + rollup_metrics=[("Llm/embedding/OpenAI/create", 1)], + custom_metrics=[ + ("Python/ML/OpenAI/%s" % openai.__version__, 1), + ], + background_task=True, +) +@validate_custom_events(embedding_invalid_key_error_events) +@validate_custom_event_count(count=1) +@background_task() +def test_embeddings_wrong_api_key_error(set_trace_info, monkeypatch, sync_openai_client): + with pytest.raises(openai.AuthenticationError): + set_trace_info() + monkeypatch.setattr(sync_openai_client, "api_key", "DEADBEEF") + sync_openai_client.embeddings.create(input="Invalid API key.", model="text-embedding-ada-002") + + +@dt_enabled +@reset_core_stats_engine() +@validate_error_trace_attributes( + callable_name(openai.AuthenticationError), + exact_attrs={ + "agent": {}, + "intrinsic": {}, + "user": { + "http.statusCode": 401, + "error.code": "invalid_api_key", + }, + }, +) +@validate_span_events( + exact_agents={ + "error.message": "Incorrect API key provided: DEADBEEF. 
You can find your API key at https://platform.openai.com/account/api-keys.", + } +) +@validate_transaction_metrics( + name="test_embeddings_error_v1:test_embeddings_wrong_api_key_error_async", + scoped_metrics=[("Llm/embedding/OpenAI/create", 1)], + rollup_metrics=[("Llm/embedding/OpenAI/create", 1)], + custom_metrics=[ + ("Python/ML/OpenAI/%s" % openai.__version__, 1), + ], + background_task=True, +) +@validate_custom_events(embedding_invalid_key_error_events) +@validate_custom_event_count(count=1) +@background_task() +def test_embeddings_wrong_api_key_error_async(set_trace_info, monkeypatch, async_openai_client, loop): + with pytest.raises(openai.AuthenticationError): + set_trace_info() + monkeypatch.setattr(async_openai_client, "api_key", "DEADBEEF") + loop.run_until_complete( + async_openai_client.embeddings.create(input="Invalid API key.", model="text-embedding-ada-002") + ) diff --git a/tox.ini b/tox.ini index 3ba0daf7a2..a0827dea61 100644 --- a/tox.ini +++ b/tox.ini @@ -495,4 +495,4 @@ source = directory = ${TOX_ENV_DIR-.}/htmlcov [coverage:xml] -output = ${TOX_ENV_DIR-.}/coverage.xml +output = ${TOX_ENV_DIR-.}/coverage.xml \ No newline at end of file From d74a2d7ef0914cb5ae97028b00f65b093e1a9d73 Mon Sep 17 00:00:00 2001 From: Hannah Stepanek Date: Mon, 18 Dec 2023 10:20:12 -0800 Subject: [PATCH 016/199] Mark instrumentation points for SDK (#1009) * Mark instrumentation points for SDK * Remove duplicated assertion * Fixup: assert attribute not function --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> --- newrelic/hooks/external_botocore.py | 7 +++++ newrelic/hooks/mlmodel_openai.py | 31 +++++++++++++------ .../test_bedrock_chat_completion.py | 4 +++ .../test_bedrock_embeddings.py | 6 +++- tests/mlmodel_openai/test_chat_completion.py | 5 +++ tests/mlmodel_openai/test_embeddings.py | 5 +++ 6 files changed, 47 insertions(+), 11 deletions(-) diff --git a/newrelic/hooks/external_botocore.py 
b/newrelic/hooks/external_botocore.py index 12bdfcafe2..561d9011f8 100644 --- a/newrelic/hooks/external_botocore.py +++ b/newrelic/hooks/external_botocore.py @@ -549,6 +549,12 @@ def _nr_clientcreator__create_api_method_(wrapped, instance, args, kwargs): return tracer(wrapped) +def _nr_clientcreator__create_methods(wrapped, instance, args, kwargs): + class_attributes = wrapped(*args, **kwargs) + class_attributes["_nr_wrapped"] = True + return class_attributes + + def _bind_make_request_params(operation_model, request_dict, *args, **kwargs): return operation_model, request_dict @@ -579,3 +585,4 @@ def instrument_botocore_endpoint(module): def instrument_botocore_client(module): wrap_function_wrapper(module, "ClientCreator._create_api_method", _nr_clientcreator__create_api_method_) + wrap_function_wrapper(module, "ClientCreator._create_methods", _nr_clientcreator__create_methods) diff --git a/newrelic/hooks/mlmodel_openai.py b/newrelic/hooks/mlmodel_openai.py index 7ea277e766..babfaf8bab 100644 --- a/newrelic/hooks/mlmodel_openai.py +++ b/newrelic/hooks/mlmodel_openai.py @@ -870,21 +870,33 @@ def wrap_base_client_process_response(wrapped, instance, args, kwargs): def instrument_openai_util(module): - wrap_function_wrapper(module, "convert_to_openai_object", wrap_convert_to_openai_object) + if hasattr(module, "convert_to_openai_object"): + wrap_function_wrapper(module, "convert_to_openai_object", wrap_convert_to_openai_object) + # This is to mark where we instrument so the SDK knows not to instrument them + # again. 
+ setattr(module.convert_to_openai_object, "_nr_wrapped", True) def instrument_openai_api_resources_embedding(module): - if hasattr(module.Embedding, "create"): - wrap_function_wrapper(module, "Embedding.create", wrap_embedding_sync) - if hasattr(module.Embedding, "acreate"): - wrap_function_wrapper(module, "Embedding.acreate", wrap_embedding_async) + if hasattr(module, "Embedding"): + if hasattr(module.Embedding, "create"): + wrap_function_wrapper(module, "Embedding.create", wrap_embedding_sync) + if hasattr(module.Embedding, "acreate"): + wrap_function_wrapper(module, "Embedding.acreate", wrap_embedding_async) + # This is to mark where we instrument so the SDK knows not to instrument them + # again. + setattr(module.Embedding, "_nr_wrapped", True) def instrument_openai_api_resources_chat_completion(module): - if hasattr(module.ChatCompletion, "create"): - wrap_function_wrapper(module, "ChatCompletion.create", wrap_chat_completion_sync) - if hasattr(module.ChatCompletion, "acreate"): - wrap_function_wrapper(module, "ChatCompletion.acreate", wrap_chat_completion_async) + if hasattr(module, "ChatCompletion"): + if hasattr(module.ChatCompletion, "create"): + wrap_function_wrapper(module, "ChatCompletion.create", wrap_chat_completion_sync) + if hasattr(module.ChatCompletion, "acreate"): + wrap_function_wrapper(module, "ChatCompletion.acreate", wrap_chat_completion_async) + # This is to mark where we instrument so the SDK knows not to instrument them + # again. 
+ setattr(module.ChatCompletion, "_nr_wrapped", True) def instrument_openai_resources_chat_completions(module): @@ -894,7 +906,6 @@ def instrument_openai_resources_chat_completions(module): wrap_function_wrapper(module, "AsyncCompletions.create", wrap_chat_completion_async) -# OpenAI v1 instrumentation points def instrument_openai_resources_embeddings(module): if hasattr(module, "Embeddings"): if hasattr(module.Embeddings, "create"): diff --git a/tests/external_botocore/test_bedrock_chat_completion.py b/tests/external_botocore/test_bedrock_chat_completion.py index 604771c824..efcc7cec05 100644 --- a/tests/external_botocore/test_bedrock_chat_completion.py +++ b/tests/external_botocore/test_bedrock_chat_completion.py @@ -287,3 +287,7 @@ def _test(): exercise_model(prompt="Invalid Token", temperature=0.7, max_tokens=100) _test() + + +def test_bedrock_chat_completion_functions_marked_as_wrapped_for_sdk_compatibility(bedrock_server): + assert bedrock_server._nr_wrapped diff --git a/tests/external_botocore/test_bedrock_embeddings.py b/tests/external_botocore/test_bedrock_embeddings.py index 7a5740e465..cc442fc158 100644 --- a/tests/external_botocore/test_bedrock_embeddings.py +++ b/tests/external_botocore/test_bedrock_embeddings.py @@ -1,4 +1,4 @@ - # Copyright 2010 New Relic, Inc. +# Copyright 2010 New Relic, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
@@ -172,3 +172,7 @@ def _test(): exercise_model(prompt="Invalid Token", temperature=0.7, max_tokens=100) _test() + + +def test_bedrock_embedding_functions_marked_as_wrapped_for_sdk_compatibility(bedrock_server): + assert bedrock_server._nr_wrapped diff --git a/tests/mlmodel_openai/test_chat_completion.py b/tests/mlmodel_openai/test_chat_completion.py index f2c31b2628..e141e45e53 100644 --- a/tests/mlmodel_openai/test_chat_completion.py +++ b/tests/mlmodel_openai/test_chat_completion.py @@ -371,3 +371,8 @@ def test_openai_chat_completion_async_disabled_custom_event_settings(loop): model="gpt-3.5-turbo", messages=_test_openai_chat_completion_messages, temperature=0.7, max_tokens=100 ) ) + + +def test_openai_chat_completion_functions_marked_as_wrapped_for_sdk_compatibility(): + assert openai.ChatCompletion._nr_wrapped + assert openai.util.convert_to_openai_object._nr_wrapped diff --git a/tests/mlmodel_openai/test_embeddings.py b/tests/mlmodel_openai/test_embeddings.py index ae2c048fc2..65ac33e87d 100644 --- a/tests/mlmodel_openai/test_embeddings.py +++ b/tests/mlmodel_openai/test_embeddings.py @@ -148,3 +148,8 @@ def test_openai_embedding_async_disabled_custom_insights_events(loop): loop.run_until_complete( openai.Embedding.acreate(input="This is an embedding test.", model="text-embedding-ada-002") ) + + +def test_openai_embedding_functions_marked_as_wrapped_for_sdk_compatibility(): + assert openai.Embedding._nr_wrapped + assert openai.util.convert_to_openai_object._nr_wrapped From e80d8c2de71f8e1b15b425d739e57a4eee563ccd Mon Sep 17 00:00:00 2001 From: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> Date: Tue, 19 Dec 2023 21:24:01 -0800 Subject: [PATCH 017/199] Langchain vector stores (#1003) * Add early exit for streaming * Add vectorstores for Langchain * Trigger runs * remove py37 and change code to support py38 * Change directory of metadata.source to run on github * Fix flaskrestx testing manually * Remove py312 (for now) * Redirect
instrumentation points * First round of test changes * Add test to find uninstrumented models Co-authored-by: Timothy Pansino Co-authored-by: Hannah Stepanek Co-authored-by: Uma Annamalai * Finish reviewer updates * Add SurrealDBStore to vectorstore list * Modified mock server for OpenAI within LangChain Co-authored-by: Hannah Stepanek * Swap out metadata.source in test * Remove commented out code * Remove assert statement --------- Co-authored-by: Hannah Stepanek Co-authored-by: Timothy Pansino Co-authored-by: Hannah Stepanek Co-authored-by: Uma Annamalai --- newrelic/config.py | 395 ++++++++++++++++++ newrelic/hooks/mlmodel_langchain.py | 183 ++++++++ .../_mock_external_openai_server.py | 176 ++++++++ tests/mlmodel_langchain/conftest.py | 157 +++++++ tests/mlmodel_langchain/hello.pdf | Bin 0 -> 3991 bytes tests/mlmodel_langchain/test_vectorstore.py | 132 ++++++ tox.ini | 21 +- 7 files changed, 1060 insertions(+), 4 deletions(-) create mode 100644 newrelic/hooks/mlmodel_langchain.py create mode 100644 tests/mlmodel_langchain/_mock_external_openai_server.py create mode 100644 tests/mlmodel_langchain/conftest.py create mode 100644 tests/mlmodel_langchain/hello.pdf create mode 100644 tests/mlmodel_langchain/test_vectorstore.py diff --git a/newrelic/config.py b/newrelic/config.py index 3c6b45b034..1d132b4b3b 100644 --- a/newrelic/config.py +++ b/newrelic/config.py @@ -2073,6 +2073,401 @@ def _process_module_builtin_defaults(): "newrelic.hooks.coroutines_asyncio", "instrument_asyncio_base_events", ) + _process_module_definition( + "langchain_community.vectorstores.docarray.hnsw", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + _process_module_definition( + "langchain_community.vectorstores.docarray.in_memory", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + _process_module_definition( + "langchain_community.vectorstores.alibabacloud_opensearch", + 
"newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + _process_module_definition( + "langchain_community.vectorstores.redis.base", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + _process_module_definition( + "langchain_community.vectorstores.analyticdb", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + _process_module_definition( + "langchain_community.vectorstores.annoy", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + _process_module_definition( + "langchain_community.vectorstores.astradb", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + _process_module_definition( + "langchain_community.vectorstores.atlas", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + _process_module_definition( + "langchain_community.vectorstores.awadb", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + _process_module_definition( + "langchain_community.vectorstores.azure_cosmos_db", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + _process_module_definition( + "langchain_community.vectorstores.azuresearch", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + _process_module_definition( + "langchain_community.vectorstores.bageldb", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + _process_module_definition( + "langchain_community.vectorstores.baiducloud_vector_search", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + _process_module_definition( + "langchain_community.vectorstores.cassandra", + "newrelic.hooks.mlmodel_langchain", + 
"instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.chroma", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.clarifai", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.clickhouse", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.dashvector", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.databricks_vector_search", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.deeplake", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.dingo", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.elastic_vector_search", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.elasticsearch", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.epsilla", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.faiss", + "newrelic.hooks.mlmodel_langchain", + 
"instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.hippo", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.hologres", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.lancedb", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.llm_rails", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.marqo", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.matching_engine", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.meilisearch", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.milvus", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.momento_vector_index", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.mongodb_atlas", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.myscale", + "newrelic.hooks.mlmodel_langchain", + 
"instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.neo4j_vector", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.nucliadb", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.opensearch_vector_search", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.pgembedding", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.pgvecto_rs", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.pgvector", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.pinecone", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.qdrant", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.rocksetdb", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.scann", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.semadb", + "newrelic.hooks.mlmodel_langchain", + 
"instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.singlestoredb", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.sklearn", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.sqlitevss", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.starrocks", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.supabase", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.surrealdb", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.tair", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.tencentvectordb", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.tigris", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.tiledb", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.timescalevector", + "newrelic.hooks.mlmodel_langchain", + 
"instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.typesense", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.usearch", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.vald", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.vearch", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.vectara", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.vespa", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.weaviate", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.xata", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.yellowbrick", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + + _process_module_definition( + "langchain_community.vectorstores.zep", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + _process_module_definition( "asyncio.events", "newrelic.hooks.coroutines_asyncio", diff --git a/newrelic/hooks/mlmodel_langchain.py b/newrelic/hooks/mlmodel_langchain.py new file mode 
100644 index 0000000000..2b2e5d232d --- /dev/null +++ b/newrelic/hooks/mlmodel_langchain.py @@ -0,0 +1,183 @@ +# Copyright 2010 New Relic, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import uuid + +from newrelic.api.function_trace import FunctionTrace +from newrelic.api.time_trace import get_trace_linking_metadata +from newrelic.api.transaction import current_transaction +from newrelic.common.object_names import callable_name +from newrelic.common.object_wrapper import wrap_function_wrapper +from newrelic.common.package_version_utils import get_package_version + +LANGCHAIN_VERSION = get_package_version("langchain") + +VECTORSTORE_CLASSES = { + "langchain_community.vectorstores.alibabacloud_opensearch": "AlibabaCloudOpenSearch", + "langchain_community.vectorstores.analyticdb": "AnalyticDB", + "langchain_community.vectorstores.annoy": "Annoy", + "langchain_community.vectorstores.astradb": "AstraDB", + "langchain_community.vectorstores.atlas": "AtlasDB", + "langchain_community.vectorstores.awadb": "AwaDB", + "langchain_community.vectorstores.azure_cosmos_db": "AzureCosmosDBVectorSearch", + "langchain_community.vectorstores.azuresearch": "AzureSearch", + "langchain_community.vectorstores.bageldb": "Bagel", + "langchain_community.vectorstores.baiducloud_vector_search": "BESVectorStore", + "langchain_community.vectorstores.cassandra": "Cassandra", + "langchain_community.vectorstores.chroma": "Chroma", + "langchain_community.vectorstores.clarifai": "Clarifai", + 
"langchain_community.vectorstores.clickhouse": "Clickhouse", + "langchain_community.vectorstores.dashvector": "DashVector", + "langchain_community.vectorstores.databricks_vector_search": "DatabricksVectorSearch", + "langchain_community.vectorstores.deeplake": "DeepLake", + "langchain_community.vectorstores.dingo": "Dingo", + "langchain_community.vectorstores.elastic_vector_search": "ElasticVectorSearch", + # "langchain_community.vectorstores.elastic_vector_search": "ElasticKnnSearch", # Deprecated + "langchain_community.vectorstores.elasticsearch": "ElasticsearchStore", + "langchain_community.vectorstores.epsilla": "Epsilla", + "langchain_community.vectorstores.faiss": "FAISS", + "langchain_community.vectorstores.hippo": "Hippo", + "langchain_community.vectorstores.hologres": "Hologres", + "langchain_community.vectorstores.lancedb": "LanceDB", + "langchain_community.vectorstores.llm_rails": "LLMRails", + "langchain_community.vectorstores.marqo": "Marqo", + "langchain_community.vectorstores.matching_engine": "MatchingEngine", + "langchain_community.vectorstores.meilisearch": "Meilisearch", + "langchain_community.vectorstores.milvus": "Milvus", + "langchain_community.vectorstores.momento_vector_index": "MomentoVectorIndex", + "langchain_community.vectorstores.mongodb_atlas": "MongoDBAtlasVectorSearch", + "langchain_community.vectorstores.myscale": "MyScale", + "langchain_community.vectorstores.neo4j_vector": "Neo4jVector", + "langchain_community.vectorstores.nucliadb": "NucliaDB", + "langchain_community.vectorstores.opensearch_vector_search": "OpenSearchVectorSearch", + "langchain_community.vectorstores.pgembedding": "PGEmbedding", + "langchain_community.vectorstores.pgvecto_rs": "PGVecto_rs", + "langchain_community.vectorstores.pgvector": "PGVector", + "langchain_community.vectorstores.pinecone": "Pinecone", + "langchain_community.vectorstores.qdrant": "Qdrant", + "langchain_community.vectorstores.redis.base": "Redis", + "langchain_community.vectorstores.rocksetdb": 
"Rockset", + "langchain_community.vectorstores.scann": "ScaNN", + "langchain_community.vectorstores.semadb": "SemaDB", + "langchain_community.vectorstores.singlestoredb": "SingleStoreDB", + "langchain_community.vectorstores.sklearn": "SKLearnVectorStore", + "langchain_community.vectorstores.sqlitevss": "SQLiteVSS", + "langchain_community.vectorstores.starrocks": "StarRocks", + "langchain_community.vectorstores.supabase": "SupabaseVectorStore", + "langchain_community.vectorstores.surrealdb": "SurrealDBStore", + "langchain_community.vectorstores.tair": "Tair", + "langchain_community.vectorstores.tencentvectordb": "TencentVectorDB", + "langchain_community.vectorstores.tigris": "Tigris", + "langchain_community.vectorstores.tiledb": "TileDB", + "langchain_community.vectorstores.timescalevector": "TimescaleVector", + "langchain_community.vectorstores.typesense": "Typesense", + "langchain_community.vectorstores.usearch": "USearch", + "langchain_community.vectorstores.vald": "Vald", + "langchain_community.vectorstores.vearch": "Vearch", + "langchain_community.vectorstores.vectara": "Vectara", + "langchain_community.vectorstores.vespa": "VespaStore", + "langchain_community.vectorstores.weaviate": "Weaviate", + "langchain_community.vectorstores.xata": "XataVectorStore", + "langchain_community.vectorstores.yellowbrick": "Yellowbrick", + "langchain_community.vectorstores.zep": "ZepVectorStore", + "langchain_community.vectorstores.docarray.hnsw": "DocArrayHnswSearch", + "langchain_community.vectorstores.docarray.in_memory": "DocArrayInMemorySearch", +} + + +def bind_similarity_search(query, k, *args, **kwargs): + return query, k + + +def wrap_similarity_search(wrapped, instance, args, kwargs): + transaction = current_transaction() + if not transaction: + return wrapped(*args, **kwargs) + + request_query, request_k = bind_similarity_search(*args, **kwargs) + function_name = callable_name(wrapped) + with FunctionTrace(name=function_name) as ft: + try: + response = wrapped(*args, 
**kwargs) + available_metadata = get_trace_linking_metadata() + except Exception: + # Re-raise so the caller sees the original error; otherwise + # `response` would be unbound below. + raise + + if not response: + return response + + # LLMVectorSearch + span_id = available_metadata.get("span.id", "") + trace_id = available_metadata.get("trace.id", "") + transaction_id = transaction.guid + id = str(uuid.uuid4()) + duration = ft.duration + response_number_of_documents = len(response) + + # Only in LlmVectorSearch dict + LLMVectorSearch_dict = { + "request.query": request_query, + "request.k": request_k, + "duration": duration, + "response.number_of_documents": response_number_of_documents, + } + + # In both LlmVectorSearch and LlmVectorSearchResult dicts + LLMVectorSearch_union_dict = { + "span_id": span_id, + "trace_id": trace_id, + "transaction_id": transaction_id, + "id": id, + "vendor": "langchain", + "ingest_source": "Python", + "appName": transaction._application._name, + } + + LLMVectorSearch_dict.update(LLMVectorSearch_union_dict) + transaction.record_custom_event("LlmVectorSearch", LLMVectorSearch_dict) + + # LLMVectorSearchResult + for index, doc in enumerate(response): + search_id = str(uuid.uuid4()) + sequence = index + page_content = getattr(doc, "page_content", "") + metadata = getattr(doc, "metadata", {}) + + metadata_dict = {"metadata.%s" % key: value for key, value in metadata.items()} + + LLMVectorSearchResult_dict = { + "search_id": search_id, + "sequence": sequence, + "page_content": page_content, + } + + LLMVectorSearchResult_dict.update(LLMVectorSearch_union_dict) + LLMVectorSearchResult_dict.update(metadata_dict) + # Dict union (PEP 584) is available in Python 3.9+ + # https://peps.python.org/pep-0584/ + # LLMVectorSearchResult_dict |= LLMVectorSearch_dict + # LLMVectorSearchResult_dict |= metadata_dict + + transaction.record_custom_event("LlmVectorSearchResult", LLMVectorSearchResult_dict) + transaction.add_ml_model_info("Langchain", LANGCHAIN_VERSION) + + return
response + + +def instrument_langchain_vectorstore_similarity_search(module): + vector_class = VECTORSTORE_CLASSES.get(module.__name__) + if vector_class and hasattr(getattr(module, vector_class, ""), "similarity_search"): + wrap_function_wrapper(module, "%s.similarity_search" % vector_class, wrap_similarity_search) diff --git a/tests/mlmodel_langchain/_mock_external_openai_server.py b/tests/mlmodel_langchain/_mock_external_openai_server.py new file mode 100644 index 0000000000..b4dba7ebf5 --- /dev/null +++ b/tests/mlmodel_langchain/_mock_external_openai_server.py @@ -0,0 +1,176 @@ +# Copyright 2010 New Relic, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import json + +import pytest +from testing_support.mock_external_http_server import MockExternalHTTPServer + +from newrelic.common.package_version_utils import get_package_version_tuple + +# This defines an external server test apps can make requests to instead of +# the real OpenAI backend. This provides 3 features: +# +# 1) This removes dependencies on external websites. +# 2) Provides a better mechanism for making an external call in a test app than +# simple calling another endpoint the test app makes available because this +# server will not be instrumented meaning we don't have to sort through +# transactions to separate the ones created in the test app and the ones +# created by an external call. +# 3) This app runs on a separate thread meaning it won't block the test app. 
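The mock-backend rationale above (external, uninstrumented, running on its own thread) can be sketched with only the standard library. The handler class, canned payload, and `start_mock_server` name below are illustrative assumptions for this sketch, not the project's `MockExternalHTTPServer` API:

```python
# A minimal, standard-library-only sketch of the mock-backend pattern: a
# local HTTP server that runs on its own thread so it never blocks the
# test app, answering every request with a canned JSON body.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer


class MockOpenAIHandler(BaseHTTPRequestHandler):
    # Canned response standing in for the recorded bodies in RESPONSES_V1.
    canned_response = {"object": "list", "data": []}

    def do_POST(self):
        # Consume the request body so the client connection stays well-behaved.
        length = int(self.headers.get("Content-Length", 0))
        self.rfile.read(length)
        body = json.dumps(self.canned_response).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, format, *args):
        # Silence per-request logging to keep test output clean.
        pass


def start_mock_server():
    # Port 0 asks the OS for any free port, so parallel test runs don't collide.
    server = HTTPServer(("127.0.0.1", 0), MockOpenAIHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, "http://127.0.0.1:%d" % server.server_port
```

Pointing the client's base URL at the returned address keeps every request local, so tests neither depend on the real OpenAI backend nor generate instrumented external calls that would have to be filtered out of the recorded transactions.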
+ +RESPONSES_V1 = { + "9906": [ + { + "content-type": "application/json", + "openai-organization": "new-relic-nkmd8b", + "openai-processing-ms": "23", + "openai-version": "2020-10-01", + "x-ratelimit-limit-requests": "3000", + "x-ratelimit-limit-tokens": "1000000", + "x-ratelimit-remaining-requests": "2999", + "x-ratelimit-remaining-tokens": "999996", + "x-ratelimit-reset-requests": "20ms", + "x-ratelimit-reset-tokens": "0s", + "x-request-id": "058b2dd82590aa4145e97c2e59681f62", + }, + 200, + { + "object": "list", + "data": [ + { + "object": "embedding", + "index": 0, + "embedding": "0TB/Ov96cDsiAKC8oBytvE/gdrsckEQ6CG5svFFCLDz4Vr+7jCAqvXdNdzx16EY8T5m2vJtdLLxfhxM7gEDzO8tQkzzAITe8b08bPIYd5DzO07O8593cO8+EDrsRy4I7jI2/vAcnrDvjPMw7ElaIvB/qFD2P5w+9kJvlPMLKrLzMl1O8DAwCvAxTwjuP54+7OsMIuu26TbxXjLI8ByesvHCWWzydczc7dF3BO6CJwjkzeQK9vQssPI42NTqPVKW8REEKO7GVjzx42Hw8xXOiuzhh07wrYLE8JDwAvS0Jp7oezKS8zxr0PEs/5jwNBB28NFMtvMKGZzt1wvG8pFAoPInkSbyZjuE7AmirOx1BHzzDN0K8cSHhPNCl+Ty5k2u8yp84vDjOaLzyDLk8jlyKO1FrfLywd587qi0ZPN0QNryGYak8fFC9vLZ94LuRkIU8x7L9OwHdJTwDhpu7sKDvPLajtbx7Ms28eRzCOp2ZjDoRpa07ZNx5PGMoJLzrL8g7KkJBvJvwFjzEwke7RK4fvWlyKrxWAS281c4UvNX3ZLz0SBm8+k7au3YsDDzoaGI7+ZqEPPatSTuNPpq8vXjBPGHsQ7yLb8+8D48iO5OxcLsb32k80KX5O+ShfLtErp+8L5SsPP07FT3C8IE8eYnXvH/5MjwHupY6EK0SvWVkhLzW7AS826uFvPGfIz2dczc8z8tOPB+Aejufa9K8bsSVPHj+UTlFOaU8kZAFvA5L3bwv22w86YZSPG/ihTxOv4u82QKQPDu7ozwqhoY8hJJevIBA8ztSYBy8EsMdPBpUZDxs9co7TTQGvH0q6Do3hyg8fJ9iO2wboDwot7s7vryGvHrNnLrLUBO8SSnbu4cSBL2e4Ew8JTSbPOmG0jxdJV47arlqvHBti7zZmHW8q0sJPIZhKb3mcMc8glZ+vOqkwjuBoqi7lcSAPKb5nbw2/KK8GMnevE00BjylAQM8y3njPDW43brZ3Do6O06OPERBirtcmlg8D2lNvKUBAzzzcek84mKhPMhjWDy0GDC/PmSZu8VzIjxfYT480nTEu3j+UTyTG4s8y1CTPIPeiLu0PoU8YruOu2z1SjyiFEi7ZY1UvPZeJLzV92S8K83Guq47v7weObq8PUYpPMM3QrwUKM48nA4HvFVQUrw4OIM8jYVaOuisJ7l+H4i8TTQGOoVDuTxbLUO8/1GgPMZrPTx16Ea6MIxHPR2uNDzLKr67QgUqPCLayjuONrU8z/EjvEK2hLxGpjo7P8nJvHvFN7zLeeM7frXtPDN8/Tv4Vr+7rmGUu0amujxnyTS8ApF7PPZeJDyvFWo7AmirO6rAAz14/tE7syAVu20TOzsMD326gsCYOj+CiTqDB1k8rs6pvDM1vTwkFqu8+2xKPG9Pm7x+bi28X
HGIurhyALzDEW2802xfvEJyvzzuHAM9JfDVPGClA7v8ZGW8fQGYPJgDXDxITzA7QA2PvA3A17wwspw8WPnHu5Xt0Lz7bMo6pL29uZFMwDutiuQ8I4slPN7BkLyS18W7q0sJPTGqtzvR6b67WKoiPPME1Dwx0Iy7EhLDO5QTJrzT/0m8nFVHO8ccmDwEzVu70uFZvGVkBD3xnyM9ZWf/vHOsZjuwCgq8VeM8veCT1jwUKM46hxIEvfX87ruFQ7k7dMrWPDN8fby9MYE8RcwPPKnp0zy7z0u8vFpRPB+Aeju9NHy8FQL5O+HXG7xljVS8TBaWPPOXvjrrwrI8UUIsvH5I2DsCaKu70TB/PKLFIrxowU889xJ6OZ2ZDLyZIcy7poyIPOrKl7zGkZI8c6zmvAzmrLwp/vs6TiwhOuchIrxJ2jU8vIAmvNqNFb1gEpk7J5lLuxtJBLxy0rs7FLu4vMJdF70xZvK89q3JuinVqzxLP2Y7frXtuqUBAzvVis+8tD6FvKGnMjykl2i7TiwhvZDBujx0Dhy87x9+vOAAbDoWs9O7qi2ZO9kCkLyF1iO8bsSVvAKR+7vNSK66O7ujuyn7gLz+M7A7W+YCPYooDzvmA7I72QKQPBfRQ7wSEkO4DQQdPJvwFjyZIcy8uAhmOsPoHLwP1uK52klQPBLDHbxxIWE8prXYPNCl+Tx764y6powIPV5DzrzfTBa79WYJvag4+TsaBb+8ysUNOyn7gDyBoig8BnZRvIXWI7uJCh88eYnXPJi0tjyNPho7OgpJvO5rqLzaIAC86PtMvBaKgzywM1q8LQmnu59CArq0PgU95J4BPNwYGz1pcqo7eRzCvGEwCb24coA8N4coPFEc17uj45K8OPS9u9XOlDwEzVu8gIQ4PHC8MDz4w1S8OgpJPEbt+jzchbC80S0EPI2FWjx9Kmg8WD0NOgYJvLkeps887HMNO1V2p7qOXAq8LBEMO4OaQ7zviRi9jNT/u8C0IbyRkAU8BS8RPaKBXTxV4zw8O06OOylolrmkl+g7T+B2vOCT1juKvnS8hJLeu29Pm7xVvWe8jNT/u3Xoxjw++n68f/myOzLIJ7vEnHK7H1eqO2z1SjxOVfE7z/GjvAqEd7xUWDe7sDNaPJEma7rLvSi8W+YCvUkACzzXDfA7FChOu5JqsLyY2gs8YKUDPN/fAL3fdWY8ZCA/OyG82jx0XcG8OgpJOee0DLzbq4U8qenTO6Zms7wHupa87HONPB71dLuaGec6KSTRuw9pTbuTsfA56+gHPN2jIDwpaBa7y1ATPKAcrTxx2iC6GyMvOug/EjwdG8o8q7geu9pJ0Ls4zmi7X87TvGq5arzl5UE902xfPI2rr7pS84Y8y1CTvHx2ErzQpfm8yGPYvHckJ7ynF4472iP7uk/g9juhOh07k4ggPKmiEzwXgh66JujwOWHGbrwuJ5e6637tu2h6j7sIAdc7/RVAO3CWWzyvWa88CEWcO3x2Ejxtz/U7zbVDvPc4z7xkRhS6mNqLuw/8NzzMl1M8kHIVuxz92Two3ZA8tYVFPRu2mTsF60u8bPVKPGB/LjzgJsG79WYJvEGYlDto56S5RBs1O16wYzwnLDa8vrwGvVkXuDxQJLw7Juhwu92joLxFYnW83X3LO+LPtrsKhPe7vZ6WvCe/IL1rRPC8mAPcvLGVj7zem7u7nS9yvPGfI70gCAW9CWMMPArQobzgaga8hvSTu5UxFr3JFLO8OlluPAG30DycVUc8EBoovGwbILxGVxU8cSHhO4zUf7uLSfq8aOckPN8I0btNDrE8VpQXuqRQqDwp+4C75nDHu70xAT0iACA7rqhUPEHnuTwOcTK8YVnZO8Ok1zv4Vr86WsgSvBtyVLzJ7t07LBEMvH9mSLy2o7U5OsMIPIMHWTsZ5048kC7QPAPzsDxYPY28V7KHOYyNPz0++n68z/GjPHC8MLlzgxa8mSHMPG/iBT21NqA8BuNmvA2XB725aps5x
AaNvC8BQjzOZp48q3RZuiP4Ortwllu8nXO3vAqEdzrtlPi6w+icu8oyozvA+2G8+XSvOFxLs7w6w4i5uh7xOD5kmTyxUUq8wzfCu3Eh4byOXIq85AuXOcMRbbyJ5Em93TYLvV5DTrztus079EgZvMGsPDymtVi8GMleO5dPhryjMji74mKhO/olCr3aIAC7ye5dPN9MljwF60s8eNUBPUhPsDsfgPo7X4eTO/mdfzvem7u7jRhFvG8L1rw7KLm84LmrPKRQqDwx0Aw9P4KJuzVLSDsJ+XE8W+l9OXjVAbxE12+7i29POzkSLrzG/qe88VteuT9cNDrKnzg8B7qWO7dUELxbLcM7ysWNOxyQRDwdrjQ8aFS6PKVIw7sGdtE8U+shPNtnQDsfgHq8nS9yu7ebULuwoG88cxYBPJXHezytQ6Q8vKl2vGz1SrsvlCw702zfPCQWKzx2c0w7URzXu2tEcLpXSO07cbRLvIHL+Lv1QDS8JceFOotJery79aC8HUEfPCLaSrwkPAC9YKWDu23PdbnNSC49q7iePHvrDDwFfrY82W+lO8nu3TsXgp48lymxvO+JmLoeXw89c/CruqQq07us/168dKGGOu8ffjszeYK7ZEYUvdpJ0Lolg8A8YKUDu70LrLwkqZU7x68CvZFMQDx+tW07iQqfvDvkc7wGCTw8OlnuvAxTQjz9O5W8ULemPFEc1zwo3RC8mAPcOggBV7thMAk8mANcurZ9YLyNhdo8H1eqvJG5VTy9NHw8FxWJu4gz77pCcj+7uf2FvE8GzDyXKbG8kxuLO/Gfo7tvT5s84+0mvOe0DDywoO+7ty47u2c2yrplZIS8TPDAPKAcLTyfkSe7TcrruyjdkLyVxIC8DHkXvYMtLjugRf075AuXvF5pIzz0SJm7Hjk6POxzDTzHia08zfmIu5wOhzxG7fo83RC2vM8a9Lv2h3Q8sVFKPG05kLzAtKE8Pvr+uryp9rpP4PY7MB8yvABwkLz4Vr87mhnnOtkCkDvG/qe7gaIoPHOs5jyzIJU8v3DcO50vcrwKPbe8xif4PLU2ILt/jB28mj+8OqySSbxduEg8uEwrvI6jSru8E5G7k7HwvO5rKLwYyd465imHOtSwpDs5pRi8prXYvHo6MjqGHeS8BKSLO9YV1Tu8gCa8zUiuuxsjLzv3Eno8sVHKuk9KETygr5e71w3wu9RDjzkRy4K8EWFovH7bwruybzo9BpwmPNczxTuVxAC8PUYpvDUECL1XH528pJfougVYYToMeZc7kHKVPCnVKzu0PoU8/jOwuvEyDjyI7C686D+SvAwMgrouugG8dXuxPNX35LvxW968M6JSO8yXUzs1cZ08s7Z6Ow/8tztsiDW8kxuLu7HktLwSw528JKmVOmhUOjzrfm084GqGvAwMgrseXw883RC2O2VkBDsYXMm8JYNAOoIPvry2EMs8bRM7vC4nlztFYnU8thDLvH5I2Dw+0S685imHPNcN8DywM1o8mLS2O6Pjkrq5Jta7jCCquSVdazz46Sm6cSFhO2uuCjz+oEW8tqO1vKcXjryONrU6xU3NvD/vHrwrOtw75KH8PKJYjTxPShG9wdKRvGA76byl2y0844ARPFProbzFc6K6AbdQvEMjmjpgpQM8s/q/vMevgrsamKk8Sz9mPNRDD7qmtdg8kSZrPvVmCbywCoo71hXVPDFm8jwFWGE8BetLPDRTrbtBweQ7UCS8O89eObyNhdq7GMlevBeCnjvnjrc768IyPAeUwbxlZ/+84ovxvOxzjbzRLYQ7/1GgvKHNh7wD87C8ukRGPCMekDtQkVG8z4SOu32UAj29npa6IbxavJhHobt+tW07F9HDPFo1qLwzolK85yEiPWq5ajy9MQE905I0uxAaqLwK0KG8Jg5Gu23P9TstxWE7BycsPI2F2rv7sA+94ADsO8ey/TyIWcS7oEV9PImdiTzIOgg9aS5lPMu9qLy4ucA8ZlyfPPtsSrza+iq8c
6xmu9MlHz2QLtC7FUa+POo3rbygRX27/jMwvWr9L7sHupa6RNdvvAvukbwmobC7LrqBu2HG7jrwgbO8AUo7vLICJTxUxcw73X3LPGku5TxI4pq8iigPvJOIILxNyuu8S6kAvUSuH700Uy08XJpYvI6jSrwT4Y28OlnuOzowHrwcau+5X85TPP6gRTwyWxI8Nmm4Ow3AVzxVvee7AUo7vFZuwrvdNos8l7wbPKrAg7t3TXe8baYlPDdD47tUMmI87HONOw5xsrt9lAK92iCAvMevArotxeE7h6jpvAG30Du79aC8ApH7uYjsrjvcX1u8l5bGOmz1Srwxqje8I/g6PHe3kbrRMH+8P++eu30qaDx4/lE8MT0iverKlzpunkA79xL6POj7TDzAIbc7fSpoPKPjkjvJFDM7nHucu5JqsLt9vVI7piJuvD7RLjzaI3u84LkrvGTc+bweps88Ru36vBD00rvuayg6NxoTvfmaBLpANl+8PG/5u2yINT3D6Jy7LBEMvMsqvrzoaOI7Im01uzN5grxCBao75eXBPMYn+LtRQiy9k7HwvHivLL789088ehRdPOSegbwi2ko8+9lfO4ZhKTy+vIY8ctI7O3jY/Dux5LS6z/GjvJqDAbxrrgq9MWbyuyQ8gLzviRi8ygzOtjVxHT15YIc8hLgzPbMglbvWWZo7zmYeux9XqjtGE1C87muoO4kKHz0kPAC7Qee5vNNsX7za+qq8UdUWPXMWgTyEkt48HvX0u3yfYjvfTBa9/jMwvNSwpDwhvNo8WjUoPIhZRDzYUTW8e8W3vGJ3ybs6MJ4818Yvu2ASmTz9FUC83PJFPHtYorvO0zO7jRjFumHG7jzHia08tqO1u6/smTyM+lS7JNLlu1iqIrzkoXy8RTmlu0naNbzZmPW7DAwCumyINbxG7fo7fkhYvGOVuTyMICq8HvX0vAo9N7xWJwK9ZCC/O24xqzu9nha9xMLHPK2K5DodrjS8sAqKvIzUfzzpGT08cdogPHPwK7z+MzA8f4wdvIE1Ezzp8+e8U+uhvG7ElTwVRr68pFCou35urbwJY4w7qDh5PCTS5TsV2ag8pCpTvA5LXbxFOSU6uN+VPAljjLwrzca6fQEYPfFbXrz5dK88vTT8O34fCD2kUKi8t1QQvD0g1DtpLuU85463PL0xgTx0ytY8RleVuw3AV7wHuha7aFS6ukIFKj1/Zkg82iP7vOldAjyVMZa7pCpTOjaPjb2aGee8qpouOXrNnDwIRRw8zNsYPUOQr7tHMcA7wPthujowHjxQt6Y7PqtZvC/b7DuyAqW7l08GPdfGr7xQt6Y8NxqTvJ9r0rvTJZ88uf0FuyMeELzPy048hxKEvKu4nrxUWDc8AY6AvGtE8DxTp1w77ti9u6jxuLxKtOC8S9JQu0K2BLyEuDO6UmCcPOBqBr1iUXQ8yGNYPDEXzbzleKw8KfuAPIq+9LuJnQm98KeIuzW4XTzG/ic7uh7xPEA23zuixaK83sGQvJaeK71KR8u8fHaSPG05kDubXSy74vWLPHCW2zwb32m7vKn2O7XJCjxksyk7KWgWvbgI5jwBjgC8U6dcPM/LTryhp7I8AAb2PPwdpTsnUou8jja1PJ+RJ7wOS908P+8evZ4kkjwFLxG8GXq5vNaoPzxG7Xo8TlXxvGhUurw0wMK6M+YXvKZmszzgaoY8cxYBvPl0rztHxCq8Z8k0va/sGbyzIJU857SMPF5DTrw/gok7ipWkPDpZbrrHiS27QnK/PEhPsDxLqQC9j1SlvM7Ts70J+fE8nKTsu7qIi7wp1as8uEyruZmOYTx+tW282o0VvLONKrt2maG8m8pBu/FbXryqw348K83GvG3P9bvmKQc9d5G8PM5Aybsc/Vm7OlnuOp4kkrye4Ey8wLShO0fEKry6HvE7f4wdvZAuULsQh725LrqBvLjflTtmXB+9VicCvEbt+jzrL8g7NUvIux7MpDuONrU8woZnvBLDnTx42Py86
1WdvNlvpTzguSs8GedOu1zenTqtQyQ76GjiOrZ94DwQ9FI8lDz2O7lqmzzF4Le8jPpUu/VmibzntIy8mY5hvJCbZbphxm67vO07veMW9zzZAhC8/sYaPdQdurv1/G47pFAoOxHLgrwBjgC8sAqKuwjYhryWnqu7AkJWvG05EDyRTMA8mY7hPOxNuLh6OjK8YnfJPEltIL3Rmhk93TYLPXXC8TuRJuu81ffkOXxQvTxhnR48frXtOurKlzrM25i7UdWWOyjdEL3hRDG8DktdO7wTkTxtORA8RTklOy5Q57tDkK86ULemPHsyTTzgJkE8635tPNXOlLtduEg7W+aCuxPhjTp5iVe8/PfPvCqvVjyLSXo78wRUOuJioTwFLxG7E3dzPBtJBL3Hsv04v3DcPGK7jjtrrgq9qaITPcPoHLxIvMU7+ZoEPRJWiLyF1qM7E3fzOEIFqjttz3W8XrDjPOho4rsM5qw8AvuVu/fLubsX0cO8RhPQOaySSTjuHIM77kVTPLpERrxk3Hk9JDwAPGqQmrt8n2I8YcZuPLU2oDzPXrk8oK8XPO26TbzA++G8fJ9iu5o/vDuvf4S8ODgDvTFm8rtDI5q7Nd6yvIeoabzBP6e8iMbZPOtVnTw6WW48GXq5PLxa0TuAqo28vKl2vNbsBDwJY4w7yDqIvAwP/bys/948frVtuxorlLyLs5S8SSlbO1OnXDsk0mW7fSrou68V6rtHxCo8CzXSPFvmAjvVO6q8UGgBvfmahDsI2IY8BVjhvAljjLsiR+C8", + } + ], + "model": "text-embedding-ada-002-v2", + "usage": {"prompt_tokens": 4, "total_tokens": 4}, + }, + ], + "12833": [ + { + "content-type": "application/json", + "openai-organization": "new-relic-nkmd8b", + "openai-processing-ms": "26", + "openai-version": "2020-10-01", + "x-ratelimit-limit-requests": "3000", + "x-ratelimit-limit-tokens": "1000000", + "x-ratelimit-remaining-requests": "2999", + "x-ratelimit-remaining-tokens": "999994", + "x-ratelimit-reset-requests": "20ms", + "x-ratelimit-reset-tokens": "0s", + "x-request-id": "d5d71019880e25a94de58b927045a202", + }, + 200, + { + "object": "list", + "data": [ + { + "object": "embedding", + "index": 0, + "embedding": 
"d4ypOv2yiTxi17k673XCuxSAAb2qjsg8jxGuvNoQ1bs0Xby8L/2bvIAk6TxKUjU8L3UfvLozmrxa94a7e8TIvIoBED0Cw6c44Ih9PKFGi7wb6LC76DWUvFUqfjvmzQm8dAwXvOqNpbxEsgs8JhB8vHiksTv9sgk6ZX9Nu+aliLrQiA688+VbvI7Bqzvh2H+8IQBevICEZLyiDpE8jpmqvFw3ED0lIPU7f+RfPNVgMrxoJ+G8kyHMO6hOvzuKAZC8Yb+xvIoBkDwK89w8L3UfPGX30LyxHnm8znAGvHOUEzuyvn28v7u7PLmTFbz8moG8wSNGu2SPxrvnvZC8fIzOPPJFV7mh5o+76f0Zu7K+/bc2Zcu7oB6KPKxGVTyo/jw8/toKvM/oiTvdGGQ8a6fzO8VbZTt4fLC79RXsuwwj7bpPitQ86hUivVvnDb0zbbU8eFQvPfPlWzv2BXO8/8qRPPC1S7zg6Pi8etTBO9TArTyI+QC8Lb0SPSCYU7w6/WU8DmP2O0/a1jsJ29Q7/WKHvAC7mLvFu2C850WNO2aX1TrOqIC7GnAtu6hOv7o77Ww7L02ePM+Yh7tffyg7Mh0zPQRTMzwXyBm9FRCNu6tWTrwsLQc86DWUvL+7u7sGM8G8rp5mOwWTvDwlcPc7xGvevOb1ijsr3QQ81ni6vJsBf7wioGI8Ok3oumX30Dyhlo27eoS/O6UGp7rQ2JC8JDDuPINU+bxQyt08irGNu94Ia7wxjSc7bDf/PEXylDvO+IK8AUukujIdMzwxLaw84dh/vNfwvbvtDTg4LR2OPMW7YDt9fNU8Y3e+Ozld4TwHI8g81ti1u7vTHj30dec7GzgzvARTMzyNqSM8x5tuPJZR3Lw3pVQ6QzoIPXcErTwBSyQ7OM1VPD3Nerx4HLW8XheePNPQJr0l0PI7lDnUOtx437p8ZM08fzRiPNmoyrwt9Qy9c2ySvNqYUTwrBYY7703BPJUB2juPES47faRWPEYKHbzcKN08MMWhvKQ+oTts5/w8A4stPUmKr7p4VC+/VSr+u45xKTyT+cq6lqHePDgd2Duv7mi6ThLRPFkHADscULs8lMFQvJjR7jwu5ZO75d0CvO/FRDp3LK67DCNtO2M/xLtbD4+8mrF8u9TALb339fk6BZO8uzOVNjwYQB08k0nNuuoVojzO0IG8ZN/IvGoHbzwH+8a8g/T9OwLrqDzBw0o8xVtlPedtjjzQiA68MkU0POz1r7xF8hQ9IqBiO5sBf7zszS68FKgCvTDFITygfoW8ixkYPHVMILxqB288rp7mu4yRmzwhAF68PI3xO71jKjy/a7m7sR75OkXKkzyXQeO7JsD5uxtgNDyoTr+3JSB1PDPlOLxod+O7L02evKsuTTz9EoW8JhB8vGq3bDyU6VE6l0FjvDM1uzsKQ9+8dUygu0yCRTxbl4s8HfA/vKXeJbzrLSo8XP+VPGun8zzJe/y6lQFavKNOGr1+9Ng8uKMOvVvnDb13BC27751DO4F0azwc2Lc8XU+YPKFGC7zvdUK86j0jPHFkA7xCcgK9HrjFPIIEdzuKAZC8UbpkPB2gPbxnh1w8qCY+PNF4FbsjkOm7iRGJvNMwIj3XGL88r+5ovCylijuf3oC8Md2pvBQghjwOY3Y8IWDZvJKBxzzQ2JA8HFC7PP56j7yoJr48FrCRPGOfPzxcrxO8r47tO00iSjxHWp+8BMu2vOiFFr2o/ry7ZufXPHV0IbxSWuk7iHEEu0xaRDyUOVQ8qy7NPI5xKbtz9A45ny6DuvF9Ubyj/he87xVHvAuT4Ttej6E8vJukvLw7KbxHqiG8eKQxu2S3x7rsHbG7T4rUOzaNTLzg6Pg69lV1OSGw27xFQpe7ZufXu/KV2buVsde82FjIO3OUEz270x68/8qRPFlXgrpet6K8qj7GOqv2UjyQAbW7iomMvDkNXzzCY8+8Bbs9vHRcGbwyfa47572QPFPqdLxzRJG8/lKOOnt0xrw17cc
7TjpSOujVmDtcX5E8cWQDPTSFvbsDsy49i6GUPNZQubz9ioi66u2gPJEZvTz3RXy8CStXO9LgH7ykjqO8fOzJO44hpzu7gxw8RcoTPJWJVjyKsQ08rv5hPNgIRrx2FCY7FtiSu9kgTjxffyi8kjHFPERSkDy3OwS7aHfjvL0Dr7ulfiq7AAubPNLgHzw5Dd85RmoYOwszZry7gxw70uCfPLJuezy3swc76f0ZvOZVBrxsN3887EWyPFw3ED27qx28IqDivAAzHLwHS0k8qIY5PNLgH7yaEXg8M201vMJjzzwy9bG796X3PB2gPbznHYy7XU8YPF8vpjyISQO9dISaPMjbdzwDY6w8vqMzPHS8FDzRGBo9RfKUvL+TOjwe4Ea8HHg8vDr9ZTx2xKO8U0pwvAhj0Ts6/eU8kWk/Peu1pjwkgPC7Wh+IuustKjxc/xW8B5vLu2kX6Dt4fDC6p+a0vCsFhrz1Fey8DXNvvPiVfjylBqe8CBPPPFp/Azz96gO7iZkFPGRnxbuT0cm6uuOXvPVlbrzJK3o8HNi3PHG0hbw9HX2835j2vNUQMLoa+Km8ZafOu70rsLvSkB08VIp5PGGHN7zRyBe8BoNDPBWYiTz0JeW7fQRSu9OoJTxGapi7c/SOu/1ih7wFG7m8iEkDPHzsybuqPsa66A0TvRnQqDt0XBm8u6udPPQlZTwH08W8ps4svAszZrrEy1k8Q8KEvKy+WDq7IyE9lqFeO5ChOTwtvZK8ZLdHvNvYWrwPo389A2OsOwlT2LtZB4A818i8urdjBT1FohK829javJgx6rp6NL28oyaZvKcOtjkFazu7vJukPLwTKDwiUGC8oPaIOgNjrLppZ+q7RaKSO5EZvbxlH9I7kHm4Ow+jfzvQsA88a0d4vLq7lrs5vdw7t7OHPIvJlbyDpPs6jkkoPKOeHDsiUOC7M+U4PApD3zxs53w6XheePDa1zbrmpYi8sR55PKYerzzamFG8XDeQvNTALTyg9oi8sH50vAfTRTwB06C81EiqugdzSjx8FEu87B2xvGpXcbwt9Yy8lOlRPIAk6buVAVq8eFSvvIzhnbwtbZC84Oj4vKiGuTuwfnS7NK2+vAWTPL07Pe+8iPmAvJP5yrxFyhM8v7u7vHG0hbyrfs+8SbKwOw4T9DzdaGY8lWHVu4kRibwEyza7cYyEu0YyHjwCmya8L50gPIk5CjuZIfG77TU5PLybJDy7gxy8PI1xvMTL2TyirpU8a6dzPPGlUjxc15S8uNsIvEXylDz/QpW7CDtQvP2yiby+8zW7XIeSu6AeirxzlBO8UHrbvHucRzzRUJQ7i8mVu8fr8DwdyD68zziMvPOF4DyNMSC8qP48PKFGC7xHWh88z8CIPAC7mDyIIQI9gcRtu5Wx17yW8eC6fSzTvKKGlDw2Fck8XXcZvFkvgTzXGD88ddScu6t+T7pRCme85Y2AvPC1yzsaICu85qWIvBrAr7xyzA283HhfvJLhwjxhXza8RaKSu41ZobwXeBc8oW4MvEdan7xqV/G8ApsmvWtH+LsGC8A87B2xPNjgxDt3jCm8DCPtutIIIbyu/uE7CkPfO5ZR3LwPo387572QPBfImTxrp3M8HFC7PGjHZbrgOHs8vYsrvBWYCTwcADm8yXt8vF0nl7xn11687iXAPO79vjyPOa882fhMPDgdWLj+eg882hBVuerFnzvzNd676hUivC1FD7yu/mG5/MICPHr8QrynDja7zzgMPAVDOryorjo7IBBXOwcjSDynvjO8pd4lPRogK7yqZsc70gghOz3NertLur+8j7EyvNHIlzxdTxg8IEjRPP/KkTwLM2Y8ejQ9vJfh57vxzVO8U5ryPJQR0zqYMeq7pLaku6iGObxZBwC86j0jvM8Qi7ytXl28TyrZO3m8ObxsN/+71TixPPQl5bwKQ9+8iWELvF4/n7wurRk9LR2OO2eH3Dzn5ZE8FTgOPKG+DrsC6yg8C5PhuncsLrzt5bY
8wotQPKpmR7o6/WW8l0FjPHFkAzzQiI68GYCmvNJonLuTcc48Xy+mO/BlybugHoo8ufOQvDa1zTzrBak7yNv3u3mUOLzxBc67WqeEvIsZGDyD9H28pm6xOw8D+zymbjE8nwYCvDjN1Tov1Rq7qcZCu/PlW7z1xek88vXUvH804jvtvbU7z+iJPJGRwLy9KzA89WXuO3FkgzyqPka8B9PFvJ/eALzsHbG4L/0bPKxGVTxH0iI8llHcvHMcELtrp/O7q6bQPDTVv7vI23e7pm6xvHskRLuwfvS8GagnvUaSGbzSkB081CCpu8W7YL2A1Ga8I0BnPPiVfruj/pc8IqDiuxwoujtypIw88kXXu/4qjbqISQO8o3abPAM7q7yCZHK7/8qRu8VbZbyTqcg7NP3Auwvj47yNMSA7UQpnPEmysDywLnI7uFOMOzkNXzwu5ZM8lJnPu+stqjszvbe8i8kVPdjgxLuOcSk8GiCru3KkDLu264E7cnyLvAkD1rsBgx48fXzVvEk6rTv/GpQ7/gIMO76jMzufBgK7COvNvNSYLLy2w4C8dAyXPL6jszt8PMy8lWFVPAFLpDsxtag8L3UfPNaguzqUOdS8MS2svJ8ug7rFW+W8o8YdPIsZGDwj8OQ8ujMaO3e0KrtLQry89NViu/56D72fBoK7koFHu7eLhro17Uc8vQMvPF4XHjvrBSk9wUvHugHToDwMg+i8NU3DOzud6jznRY28fvTYO0OKCrwHS8m8DcNxvEOKijwHI0i8WVcCvTBlpjyN+SU8qP48u14/n7zUSCq8lqHeuja1TTzFu2A8X38ovF8HpbzTMCK7eqxAvFr3hjvTMCK7GBgcu4NUebwA45k8HaA9vFlXArzO+II8mNHuvNx4Xzxx3IY7q6ZQvKfmtDzFW2W8HgjIO1Xae7xzRJE884XgvKNOmjy7gxw49HVnvP5SjrwYaB685lWGvBnQqLx11By7XScXvY6ZqrkC66i8grT0POgNkzulfqq8kZFAvJjR7ruhRgu8HWjDPNQgqbxdn5o53CjdvMg7czxpZ+o8ddScvP5SjrzvnUM8xvtpu+iFFjwKQ988r+5oPqBWhLzPOIw6FKgCPfZVdTw7PW+8kAE1PfC1y7svTR67LW2QPGBHrrybAX+8A7OuvIwJHzzQsI88DwN7u4npB72JwQa8iTkKveXdAr1kB0q71RCwuxbYkrugfoW8AsOnPKj+vDrDA9S4YG8vPCylijyYMWo79RXsvATLtrwc2Lc8DCPtPH0EUjpxPAI8S7q/PP0SBTyvjm08O51qPLc7hDtMqkY8ixkYvIyRm7yUOVQ8z+iJPLbrATysvli8SYovPCQw7jytXl28SdqxOxWYCT1blws9S5I+vPyagTxsN3+7k9HJulqnhLsLM+Y6qNa7uy/9Gz2sHlQ85QWEPAg7ULynvjM8alfxvCAQVzqX4Wc88+XbOhXACrvqjSU8pm4xO7w7Kbymziy9CQNWvIpRkjzp/Rk8F1AWPS4NFT1Uivm7ilGSOjIdMzxff6i8qt5KvLoLGb1etyI8pBagPNmAybxgHy27rp7mO1rPBbwGW0I8LyUdPBxQuzx4VC8806glPKGWjTwtHY68XXcZO99I9LzoXRW8RgodvBSogjtIEqy8No1Mu0bim7sbELI8FKiCO30E0ryTqUi8JSB1vE0iSjyL8Ra81/C9O9jgxLsUSAc8cRQBO7ijjrqR8Ts7ZC9LuwBbHTx93FC7CzPmPExaxDunXri8ZR/SuhzYN7z2BXM7OB3YvJoReDyJmQU7xvvpPHh8sLtkB0o7rQ5bPKS2JLtRuuQ8IWDZvMDTwzu4e408l5FlvCx9CbyjTho8tzuEvEaSGb0c2Lc7itmOvCEA3ryVYVW8/YoIvaP+l7xaH4i4W5cLuwCTlzwGq0Q7ztABvTT9wLt8FMs3XU+YPP2KCLxyzI076U0cPFkHgLzrVau80RiavEiaKL5b5w0909Cmu19/qLy6u5Y
87b01PLybpDzb2Fo88GXJvOA4e7qMuZw8IlDgvNIIobx7TMW8RNoMvFXae7xjx0C8cgQIPDzdcz1D6gU8icGGPJNxzrwNw3E8gcRtvMTLWTtQyl28N6VUO8/oCT3u/b68mNFuOzNttbxOEtE79gVzOkJyAjyOwau75lWGu9doQTuNWaG8NwVQvLezhzxff6g8HRhBPJWJ1juNWaG8iZkFusW74LvEG1w8txODPKVWKT0USIe72ajKPGjH5bynXjg87M2uPK5O5LqDpHs8GnAtvFrPhTzeWG28Yk89PC29EjxZL4G8BoPDPHrUwbs0hb268h3WOqy+WDuM4R08SdoxvEtCPDxTmvK7FtgSvdx4Xzv0dee8x0tsPEoqNDy8myS9LjWWPDV1xLygfgW8TsLOu+wdsTozlba8vJskvAEjo7xKojc8mDHqOwEjozsw7SK8SYqvvNn4zDxkB0q8Siq0vDa1zbvEG9w7vHMjPAfTxTtsN/85a/f1uzgd2DvgiP07CQNWvDzdczxDEgc8IMBUPMMD1DuPYTA8jumsPB4wybtkL8s6rQ5bvGTfyDwCm6Y8Q+oFPSJQ4LsAkxc8Ok1ovIP0fbygzoc8QxKHO7ezBz3eqO+56A0TvdU4Mbu4e4271qA7u5HJur3WKDi8dsQjO58ugzsypS88ursWPXWsG7upFsU8pS4oPNYouDtiT728ciwJvFvnjbxaHwi8FmCPPP4CjLyyvn08Vdr7vB+ozLxk38g896V3vI7Bq7wDAzE72LjDvGRnxbv3RXw76XWdvI2BIjwIE888AnOlOeCIfby9iyu8AwMxPMEjRrxgz6q7qNa7t4yRG70rPYA8dfwdPIj5AL0f0E08IEjRu9x4Xzv2tfC85qUIPC8lnTsDi628tusBPMnLfrxPKtm6icEGvUQCjryv7ui8ufOQO9WIszzgiP264Oj4O0tCvDu9s6y7dUwgvKiuurmqBsy7Snq2vO1tszzltYG7u6udO6SOo7wmYP47L3WfO68+a7v/8hK8S/I5PbdjBTzwjUo8f+TfvHVMoLs3fdO8C+PjvBs4MzyWoV677KUtvAGrn7yJEYm7gcTtu3lENrsrBYY8N1VSu4tBGTscALm7wuvLvLcTAzyjxp08aCfhupqxfDzWALc7pLaktxNYgLzEy1m70NgQPbzrpjvZqEq8t4sGvJdBY72QUbc8Ms2wvDXtx7yBdGs7l0FjvCTga7v8woK7icGGvDJFtLt3LK47MMUhvAH7oTtx3IY7dmQove9NQTxKKjQ9MS0sPMITTTwIi1I8D6N/vK/uaLxcXxE8M221PN0Y5DtnN9o6O53qvDLNsDzHS2w7UvrtvNZ4ujzFW+W8jOEdPCAQ1zywLnK8cqQMPL/jPDt1rBs9clSKvBs4M7mihpS8uCuLvOdtjjx6rMC5DCNtvEQqjzinXri56F0VO+rtID2lLqg7FXAIvFz/lTxOElG87PWvu0PChDqMaZq8fBRLvGtH+Dum9q060LAPvSBIUT23iwY8vWOqPNAoEzxg96s7LZWRO9fIPLzQiA68krnBu1rPBb0szYu8MzW7ux8gULxc15Q7LfWMPJ/egDxzlBM7YJcwO4kRCb3mzYk8Y+/BPNT4pzwjkGm8ojaSPBgYHD0PA3s8RAIOu87QgbtN0se6TOJAPJkh8bz+Agy72ahKvP/KET3mVQa6Bbu9u8iLdTt7xMg4W5eLPBQghjxfL6Y3ZafOOVJaabyMCZ+8GTAkvEPCBLyJEYm850UNvakWRTx11Bw9ob4OPHcErTvdGOQ8H9BNO3VMoLxUOne7NwVQPJWxV7wBqx+9uAMKPQZbwjrBm8k8Q8IEPeA4+7wZgKY8xvvpOzBlJjm4ow68UbrkOzhtWrw2Zcu7W2+KPIvxlrz0dWe8ulsbvMRr3rrXyDy8JDDuPFlXAjvceF89igGQOgmz07sv1Zo8fkRbPI7BqzyaYfo8kUE+Otx437xrR/i7o/6XPP8aFDy+ozO
8arfsvEVCl7wX8Bq7FoiQvBfwmrn/8pK7AnOlPBWYiTweMMk7ASOjPFxfkbssfYm8qCY+uwSjNTxSWmk87M2uu/9CFb2nlrI8DCPtu9oQ1bySgce5gcRtPBhAHbvmVYa8FEiHO/4CDDzUcCu7zvgCu/OF4Lxdx5u8kllGvU0iSjvPmIc80VAUvAvjYzy5a5S7", + } + ], + "model": "text-embedding-ada-002-v2", + "usage": {"prompt_tokens": 5, "total_tokens": 5}, + }, + ], +} + + +@pytest.fixture(scope="session") +def simple_get(openai_version, extract_shortened_prompt): + def _simple_get(self): + content_len = int(self.headers.get("content-length")) + content = json.loads(self.rfile.read(content_len).decode("utf-8")) + + prompt = extract_shortened_prompt(content) + if not prompt: + self.send_response(500) + self.end_headers() + self.wfile.write("Could not parse prompt.".encode("utf-8")) + return + + headers, response = ({}, "") + + mocked_responses = RESPONSES_V1 + + for k, v in mocked_responses.items(): + if prompt.startswith(k): + headers, status_code, response = v + break + else: # If no matches found + self.send_response(500) + self.end_headers() + self.wfile.write(("Unknown Prompt:\n%s" % prompt).encode("utf-8")) + return + + # Send response code + self.send_response(status_code) + + # Send headers + for k, v in headers.items(): + self.send_header(k, v) + self.end_headers() + + # Send response body + self.wfile.write(json.dumps(response).encode("utf-8")) + return + + return _simple_get + + +@pytest.fixture(scope="session") +def MockExternalOpenAIServer(simple_get): + class _MockExternalOpenAIServer(MockExternalHTTPServer): + # To use this class in a test one needs to start and stop this server + # before and after making requests to the test app that makes the external + # calls. 
+ + def __init__(self, handler=simple_get, port=None, *args, **kwargs): + super(_MockExternalOpenAIServer, self).__init__(handler=handler, port=port, *args, **kwargs) + + return _MockExternalOpenAIServer + + +@pytest.fixture(scope="session") +def extract_shortened_prompt(openai_version): + def _extract_shortened_prompt(content): + _input = content.get("input", None) + prompt = (_input and str(_input[0][0])) or content.get("messages")[0]["content"] + return prompt + + return _extract_shortened_prompt + + +def get_openai_version(): + # Import OpenAI so that get_package_version_tuple can capture the version from the + # system module. OpenAI does not have a package version in v0. + import openai # noqa: F401; pylint: disable=W0611 + + return get_package_version_tuple("openai") + + +@pytest.fixture(scope="session") +def openai_version(): + return get_openai_version() + + +if __name__ == "__main__": + with MockExternalOpenAIServer() as server: + print("MockExternalOpenAIServer serving on port %s" % str(server.port)) + while True: + pass # Serve forever diff --git a/tests/mlmodel_langchain/conftest.py b/tests/mlmodel_langchain/conftest.py new file mode 100644 index 0000000000..32b4370b3e --- /dev/null +++ b/tests/mlmodel_langchain/conftest.py @@ -0,0 +1,157 @@ +# Copyright 2010 New Relic, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +import json +import os + +import pytest +from _mock_external_openai_server import ( # noqa: F401; pylint: disable=W0611 + MockExternalOpenAIServer, + extract_shortened_prompt, + get_openai_version, + openai_version, + simple_get, +) +from langchain_community.embeddings.openai import OpenAIEmbeddings +from testing_support.fixture.event_loop import ( # noqa: F401; pylint: disable=W0611 + event_loop as loop, +) +from testing_support.fixtures import ( # noqa: F401; pylint: disable=W0611 + collector_agent_registration_fixture, + collector_available_fixture, +) + +from newrelic.api.transaction import current_transaction +from newrelic.common.object_wrapper import wrap_function_wrapper + +_default_settings = { + "transaction_tracer.explain_threshold": 0.0, + "transaction_tracer.transaction_threshold": 0.0, + "transaction_tracer.stack_trace_threshold": 0.0, + "debug.log_data_collector_payloads": True, + "debug.record_transaction_failure": True, + "ml_insights_events.enabled": True, +} + +collector_agent_registration = collector_agent_registration_fixture( + app_name="Python Agent Test (mlmodel_langchain)", + default_settings=_default_settings, + linked_applications=["Python Agent Test (mlmodel_langchain)"], ) + + +OPENAI_AUDIT_LOG_FILE = os.path.join(os.path.realpath(os.path.dirname(__file__)), "openai_audit.log") +OPENAI_AUDIT_LOG_CONTENTS = {} +# Intercept outgoing requests and log to file for mocking +RECORDED_HEADERS = set(["x-request-id", "content-type"]) + + +@pytest.fixture(scope="session") +def openai_clients(openai_version, MockExternalOpenAIServer): # noqa: F811 + """ + This configures an OpenAIEmbeddings client pointed at either the mock OpenAI + server or, when recording responses, the real OpenAI backend. 
+ """ + from newrelic.core.config import _environ_as_bool + + if not _environ_as_bool("NEW_RELIC_TESTING_RECORD_OPENAI_RESPONSES", False): + with MockExternalOpenAIServer() as server: + yield OpenAIEmbeddings( + openai_api_key="NOT-A-REAL-SECRET", openai_api_base="http://localhost:%d" % server.port + ) + else: + openai_api_key = os.environ.get("OPENAI_API_KEY") + if not openai_api_key: + raise RuntimeError("OPENAI_API_KEY environment variable required.") + + yield OpenAIEmbeddings(openai_api_key=openai_api_key) + + +@pytest.fixture(scope="session") +def embeding_openai_client(openai_clients): + embedding_client = openai_clients + return embedding_client + + +@pytest.fixture +def set_trace_info(): + def set_info(): + txn = current_transaction() + if txn: + txn.guid = "transaction-id" + txn._trace_id = "trace-id" + + return set_info + + +@pytest.fixture(autouse=True, scope="session") +def openai_server( + openai_version, # noqa: F811 + openai_clients, + wrap_httpx_client_send, +): + """ + This fixture will either create a mocked backend for testing purposes, or will + set up an audit log file to record responses from the real OpenAI backend. + Set NEW_RELIC_TESTING_RECORD_OPENAI_RESPONSES=1 as an environment variable to + run against the real OpenAI backend. (Default: mocking) + """ + from newrelic.core.config import _environ_as_bool + + if _environ_as_bool("NEW_RELIC_TESTING_RECORD_OPENAI_RESPONSES", False): + wrap_function_wrapper("httpx._client", "Client.send", wrap_httpx_client_send) + yield # Run tests + # Write responses to audit log + with open(OPENAI_AUDIT_LOG_FILE, "w") as audit_log_fp: + json.dump(OPENAI_AUDIT_LOG_CONTENTS, fp=audit_log_fp, indent=4) + else: + # We are mocking openai responses so we don't need to do anything in this case. 
+ yield + + +def bind_send_params(request, *, stream=False, **kwargs): + return request + + +@pytest.fixture(scope="session") +def wrap_httpx_client_send(extract_shortened_prompt): # noqa: F811 + def _wrap_httpx_client_send(wrapped, instance, args, kwargs): + request = bind_send_params(*args, **kwargs) + if not request: + return wrapped(*args, **kwargs) + + params = json.loads(request.content.decode("utf-8")) + prompt = extract_shortened_prompt(params) + + # Send request + response = wrapped(*args, **kwargs) + + if response.status_code >= 400 or response.status_code < 200: + prompt = "error" + + rheaders = getattr(response, "headers") + + headers = dict( + filter( + lambda k: k[0].lower() in RECORDED_HEADERS + or k[0].lower().startswith("openai") + or k[0].lower().startswith("x-ratelimit"), + rheaders.items(), + ) + ) + body = json.loads(response.content.decode("utf-8")) + OPENAI_AUDIT_LOG_CONTENTS[prompt] = headers, response.status_code, body # Append response data to log + return response + + return _wrap_httpx_client_send diff --git a/tests/mlmodel_langchain/hello.pdf b/tests/mlmodel_langchain/hello.pdf new file mode 100644 index 0000000000000000000000000000000000000000..4eb6f2ac534a771519a87903743153a0932ad4f0 GIT binary patch literal 3991 zcmc&%3pAAL8XnhSa!;ac{WX$=nSaKZ8MovfHse<0(#)9OB;z(Su2YKCMkJvWN{M94 zJ(n&niI5PLTU10TrO2ghIy2erQ>WJ2XPvdqIkVQkzIp%o=Ka6td7tn5pLZk4#7qO! 
z#344;&RtkSU;s4WPu+{q(*sb}-hL2;=^YNy0FWC2SR#=-S^%y99!&s13?6U?P_}?J zfHDQJc#u2V@FqI)0vupqfPnmHyj;Nl;e)__1H>^b00L0X_K+7B2INO)?*VW)0A=RQ zU_(sq$e^$x6Uft_#+|i1k|<1H7QiFH*q;&P>&F7L02GM{(Y!s`{!A`*E<6CWlS>5% zV7M2Ti0ytfD3p7H8xoddFMj~!EyEr_+57vmxhcMUci=J#V9;Ok1uql#13{?mCV;CR zMD+qGo>&}(27*)^28+gGAqe!Mp(#|V7Y;);(DQ_7IEaYB;5;D+izi?)Am{~pVu*MO zNF!3UXa?@QI1EjZHI%rL4%>9!@jV#ot$D7rY z(!*oZ@5*#-PoCYRW7+ZUzlt*|bfMx_~%?T)f3n&P9P$iQN^tdVOCsF>OB*P5e8e9fGLr zF9)OXb`vwMYsm~xSAG7s!rA!DE{5mD!6U7>pBUfVi#vpG*I zw~CUg#46VNVn(Hl_9;f+!ifGR+q22|j_HgW1i5vI&j@Ezb{C)9>NSih>zXTr{L)*b zjafO4-WntanZw()G{L|wox|k@N~3wJx?jzY{<25QX*@J0#RnAg->3HoX=`Y7)1zjy z&%?FUI^v8&ZyZ7Fj!cP@GonJy^>##qQ`M*YS(#onDT8z`;;DyD$xw51sLF4ZJ%j>i zR8wUO63Ou?@TUhWr0I1hA%;c@iZLqfZO;|b_yruAIkX3@j4@(viUz(?iv>9)qpogUHZUs5I}l!S~|7s<-yyqLV(=N?L~tao}t zRljivEYnPWIHvW--TvB0qoKeyTElY;)up$rcSWzqgcD=yjwqsRDa71J76qq=OLQjI z>WSCNtX{oS2A}7gJukml*f;TJ+AYO?AuC8tG@=|fR&P{l=DH=QP%c(~=1N)lhl7Uf z%o9ihAKgWj1U1o;d|JVAlP%gg=V=!gHN~4WtUG4TX0t@|5{me0lKm`ocf3^}l}Jhz zd3~^G@Kr`!d>3-R1CsUpYRT}*U)V1KpBCNsIo`K6@;A;l)V{%FtsAh{Mczf0uH*}a zIp71*q(9?>Lx8_PFoJ@q*nr=x_;|T0JY#$N6MvXiycKg#f zf7%7($$YZcK##uNRF7PF`}gD1Hr@x-WIWLm^PoYyoMUGNvT;OkCO~hG2bD0QJO9{f)auc#Pui zJAxXfv9aVk8u_-4A2SE|%#{Wt1UKEAk*HQrTZxT&lYc1xxDFo zQMc9bX!LBkdCt9oQOU$^bh2~CwK>ytSbKEdr`(m|!TNYo&tOl;j`oG{D#XxAy2%at zW~-79C(7+M$wZio7bgt}hBd|4c6(fbXVtV=DcJR>5DFO+f<~lunXtta>rt!G<5eu`w&cw`G;cLY*$PQJ=#On4Nf)5$G4`-a11JNuF3hI zaxyfVL&|4<&x@Yl;ISxA+7u^>m*EH)v4Bn)gouOt-XPk`jLt4a*=UlL#ZCMCUp>O@ z=is@oeV_bzB7^9^JCgr=gJZz&UVT1f7{A#nl_N(4j|dA3!6be8RezG4=Ph{lz?D0q z0d1~D#S<}rJ8vfMHaGsKeOUGvw$!~2Y~Q~Qe4IAdBmLA^*l@iJ_ZKFM>tXTVEiJ)e zz^~s>YJbPhHrH*bBZD*gamTAtgg5$1bw-(l2E1!3p-t|Z)rqQ|jQ2h9AtzsGVq(6n zok*8^3ln(DznZ@azM(EtvM}*FOm7pvQq*dlI;4nxa1dw3Z25_jmhv-=X1B<}$h3g8 zr|}@3DR8s|HLoyBaIc32Gm#-#YaO(OMI!Fq#?kSPk#$f`@|dc0nB`C@CTz3KJ1r@8 zZbQSRXHsU_71PRvqi$OSWAyroRfJiBALx@7p)u9otak=9+x_NUu2uILuiOARbj zZ?}@{#5wlOWFdHQ?d5D6Ols6(HFZ!gowd1s=qakf^s~JJ|Hf!vN;>@d=({Hwow1FK 
z{e=nYa<5DUob-3Z7UV5H?NZurxNk@LE3rd?7tW0ypHG1_SD26&?u%^n-420 @@ -344,7 +348,15 @@ deps = framework_tornado-tornadomaster: https://github.com/tornadoweb/tornado/archive/master.zip mlmodel_openai-openai0: openai[datalib]<1.0 mlmodel_openai-openailatest: openai[datalib] + ; Required for testing mlmodel_openai: protobuf + mlmodel_langchain: langchain + mlmodel_langchain: langchain-community + mlmodel_langchain: openai[datalib] + ; Required for testing + mlmodel_langchain: pypdf + mlmodel_langchain: tiktoken + mlmodel_langchain: faiss-cpu logger_loguru-logurulatest: loguru logger_loguru-loguru06: loguru<0.7 logger_loguru-loguru05: loguru<0.6 @@ -464,6 +476,7 @@ changedir = framework_strawberry: tests/framework_strawberry framework_tornado: tests/framework_tornado mlmodel_openai: tests/mlmodel_openai + mlmodel_langchain: tests/mlmodel_langchain logger_logging: tests/logger_logging logger_loguru: tests/logger_loguru logger_structlog: tests/logger_structlog From 3d3aa4fe8f5927b1fb3e16d79c3bc98735307126 Mon Sep 17 00:00:00 2001 From: Hannah Stepanek Date: Wed, 20 Dec 2023 16:14:34 -0800 Subject: [PATCH 018/199] Prefix conversation id with llm (#1012) * Change conversation_id->llm.conversation_id * Fixup formatting --- newrelic/hooks/external_botocore.py | 65 +++++++++++++++---- newrelic/hooks/mlmodel_openai.py | 4 +- .../test_bedrock_chat_completion.py | 9 ++- tests/mlmodel_openai/test_chat_completion.py | 6 +- .../test_chat_completion_error.py | 12 ++-- .../test_chat_completion_error_v1.py | 8 +-- .../mlmodel_openai/test_chat_completion_v1.py | 6 +- .../test_get_llm_message_ids.py | 10 ++- .../test_get_llm_message_ids_v1.py | 4 +- 9 files changed, 85 insertions(+), 39 deletions(-) diff --git a/newrelic/hooks/external_botocore.py b/newrelic/hooks/external_botocore.py index 561d9011f8..ca63991af6 100644 --- a/newrelic/hooks/external_botocore.py +++ b/newrelic/hooks/external_botocore.py @@ -158,7 +158,6 @@ def extract_bedrock_titan_text_model(request_body, 
response_body=None): input_message_list = [{"role": "user", "content": request_body.get("inputText", "")}] - chat_completion_summary_dict = { "request.max_tokens": request_config.get("maxTokenCount", ""), "request.temperature": request_config.get("temperature", ""), @@ -170,7 +169,9 @@ def extract_bedrock_titan_text_model(request_body, response_body=None): completion_tokens = sum(result["tokenCount"] for result in response_body.get("results", [])) total_tokens = input_tokens + completion_tokens - output_message_list = [{"role": "assistant", "content": result["outputText"]} for result in response_body.get("results", [])] + output_message_list = [ + {"role": "assistant", "content": result["outputText"]} for result in response_body.get("results", []) + ] chat_completion_summary_dict.update( { @@ -218,7 +219,9 @@ def extract_bedrock_ai21_j2_model(request_body, response_body=None): } if response_body: - output_message_list =[{"role": "assistant", "content": result["data"]["text"]} for result in response_body.get("completions", [])] + output_message_list = [ + {"role": "assistant", "content": result["data"]["text"]} for result in response_body.get("completions", []) + ] chat_completion_summary_dict.update( { @@ -275,7 +278,9 @@ def extract_bedrock_cohere_model(request_body, response_body=None): } if response_body: - output_message_list = [{"role": "assistant", "content": result["text"]} for result in response_body.get("generations", [])] + output_message_list = [ + {"role": "assistant", "content": result["text"]} for result in response_body.get("generations", []) + ] chat_completion_summary_dict.update( { "response.choices.finish_reason": response_body["generations"][0]["finish_reason"], @@ -377,13 +382,31 @@ def wrap_bedrock_runtime_invoke_model(wrapped, instance, args, kwargs): if operation == "embedding": # Only available embedding models handle_embedding_event( - instance, transaction, extractor, model, None, None, request_body, - ft.duration, True, trace_id, span_id 
+ instance, + transaction, + extractor, + model, + None, + None, + request_body, + ft.duration, + True, + trace_id, + span_id ) else: handle_chat_completion_event( - instance, transaction, extractor, model, None, None, request_body, - ft.duration, True, trace_id, span_id + instance, + transaction, + extractor, + model, + None, + None, + request_body, + ft.duration, + True, + trace_id, + span_id ) finally: @@ -430,7 +453,17 @@ def wrap_bedrock_runtime_invoke_model(wrapped, instance, args, kwargs): def handle_embedding_event( - client, transaction, extractor, model, response_body, response_headers, request_body, duration, is_error, trace_id, span_id + client, + transaction, + extractor, + model, + response_body, + response_headers, + request_body, + duration, + is_error, + trace_id, + span_id ): embedding_id = str(uuid.uuid4()) @@ -465,10 +498,20 @@ def handle_embedding_event( def handle_chat_completion_event( - client, transaction, extractor, model, response_body, response_headers, request_body, duration, is_error, trace_id, span_id + client, + transaction, + extractor, + model, + response_body, + response_headers, + request_body, + duration, + is_error, + trace_id, + span_id ): custom_attrs_dict = transaction._custom_params - conversation_id = custom_attrs_dict.get("conversation_id", "") + conversation_id = custom_attrs_dict.get("llm.conversation_id", "") chat_completion_id = str(uuid.uuid4()) diff --git a/newrelic/hooks/mlmodel_openai.py b/newrelic/hooks/mlmodel_openai.py index babfaf8bab..8534502289 100644 --- a/newrelic/hooks/mlmodel_openai.py +++ b/newrelic/hooks/mlmodel_openai.py @@ -193,7 +193,7 @@ def wrap_chat_completion_sync(wrapped, instance, args, kwargs): # Get conversation ID off of the transaction custom_attrs_dict = transaction._custom_params - conversation_id = custom_attrs_dict.get("conversation_id", "") + conversation_id = custom_attrs_dict.get("llm.conversation_id", "") settings = transaction.settings if transaction.settings is not None else 
global_settings() app_name = settings.app_name @@ -650,7 +650,7 @@ async def wrap_chat_completion_async(wrapped, instance, args, kwargs): # Get conversation ID off of the transaction custom_attrs_dict = transaction._custom_params - conversation_id = custom_attrs_dict.get("conversation_id", "") + conversation_id = custom_attrs_dict.get("llm.conversation_id", "") settings = transaction.settings if transaction.settings is not None else global_settings() app_name = settings.app_name diff --git a/tests/external_botocore/test_bedrock_chat_completion.py b/tests/external_botocore/test_bedrock_chat_completion.py index efcc7cec05..2c4925a43b 100644 --- a/tests/external_botocore/test_bedrock_chat_completion.py +++ b/tests/external_botocore/test_bedrock_chat_completion.py @@ -23,7 +23,6 @@ chat_completion_expected_events, chat_completion_invalid_access_key_error_events, chat_completion_payload_templates, - chat_completion_invalid_access_key_error_events, ) from conftest import BOTOCORE_VERSION from testing_support.fixtures import ( @@ -128,7 +127,7 @@ def test_bedrock_chat_completion_in_txn_with_convo_id(set_trace_info, exercise_m @background_task(name="test_bedrock_chat_completion_in_txn_with_convo_id") def _test(): set_trace_info() - add_custom_attribute("conversation_id", "my-awesome-id") + add_custom_attribute("llm.conversation_id", "my-awesome-id") exercise_model(prompt=_test_bedrock_chat_completion_prompt, temperature=0.7, max_tokens=100) _test() @@ -160,7 +159,7 @@ def _test(): @reset_core_stats_engine() @validate_custom_event_count(count=0) def test_bedrock_chat_completion_outside_txn(set_trace_info, exercise_model): - add_custom_attribute("conversation_id", "my-awesome-id") + add_custom_attribute("llm.conversation_id", "my-awesome-id") exercise_model(prompt=_test_bedrock_chat_completion_prompt, temperature=0.7, max_tokens=100) @@ -237,7 +236,7 @@ def test_bedrock_chat_completion_error_invalid_model(bedrock_server, set_trace_i 
@background_task(name="test_bedrock_chat_completion_error_invalid_model") def _test(): set_trace_info() - add_custom_attribute("conversation_id", "my-awesome-id") + add_custom_attribute("llm.conversation_id", "my-awesome-id") with pytest.raises(_client_error): bedrock_server.invoke_model( body=b"{}", @@ -283,7 +282,7 @@ def _test(): with pytest.raises(_client_error): # not sure where this exception actually comes from set_trace_info() - add_custom_attribute("conversation_id", "my-awesome-id") + add_custom_attribute("llm.conversation_id", "my-awesome-id") exercise_model(prompt="Invalid Token", temperature=0.7, max_tokens=100) _test() diff --git a/tests/mlmodel_openai/test_chat_completion.py b/tests/mlmodel_openai/test_chat_completion.py index e141e45e53..76017a22a8 100644 --- a/tests/mlmodel_openai/test_chat_completion.py +++ b/tests/mlmodel_openai/test_chat_completion.py @@ -146,7 +146,7 @@ @background_task() def test_openai_chat_completion_sync_in_txn_with_convo_id(set_trace_info): set_trace_info() - add_custom_attribute("conversation_id", "my-awesome-id") + add_custom_attribute("llm.conversation_id", "my-awesome-id") openai.ChatCompletion.create( model="gpt-3.5-turbo", messages=_test_openai_chat_completion_messages, temperature=0.7, max_tokens=100 ) @@ -272,7 +272,7 @@ def test_openai_chat_completion_sync_in_txn_no_convo_id(set_trace_info): @reset_core_stats_engine() @validate_custom_event_count(count=0) def test_openai_chat_completion_sync_outside_txn(): - add_custom_attribute("conversation_id", "my-awesome-id") + add_custom_attribute("llm.conversation_id", "my-awesome-id") openai.ChatCompletion.create( model="gpt-3.5-turbo", messages=_test_openai_chat_completion_messages, temperature=0.7, max_tokens=100 ) @@ -335,7 +335,7 @@ def test_openai_chat_completion_async_conversation_id_unset(loop, set_trace_info @background_task() def test_openai_chat_completion_async_conversation_id_set(loop, set_trace_info): set_trace_info() - add_custom_attribute("conversation_id", 
"my-awesome-id") + add_custom_attribute("llm.conversation_id", "my-awesome-id") loop.run_until_complete( openai.ChatCompletion.acreate( diff --git a/tests/mlmodel_openai/test_chat_completion_error.py b/tests/mlmodel_openai/test_chat_completion_error.py index fe298c02bb..a8d3bdc512 100644 --- a/tests/mlmodel_openai/test_chat_completion_error.py +++ b/tests/mlmodel_openai/test_chat_completion_error.py @@ -131,7 +131,7 @@ def test_chat_completion_invalid_request_error_no_model(set_trace_info): with pytest.raises(openai.InvalidRequestError): set_trace_info() - add_custom_attribute("conversation_id", "my-awesome-id") + add_custom_attribute("llm.conversation_id", "my-awesome-id") openai.ChatCompletion.create( # no model provided, messages=_test_openai_chat_completion_messages, @@ -215,7 +215,7 @@ def test_chat_completion_invalid_request_error_no_model(set_trace_info): def test_chat_completion_invalid_request_error_invalid_model(set_trace_info): with pytest.raises(openai.InvalidRequestError): set_trace_info() - add_custom_attribute("conversation_id", "my-awesome-id") + add_custom_attribute("llm.conversation_id", "my-awesome-id") openai.ChatCompletion.create( model="does-not-exist", messages=({"role": "user", "content": "Model does not exist."},), @@ -315,7 +315,7 @@ def test_chat_completion_invalid_request_error_invalid_model(set_trace_info): def test_chat_completion_authentication_error(monkeypatch, set_trace_info): with pytest.raises(openai.error.AuthenticationError): set_trace_info() - add_custom_attribute("conversation_id", "my-awesome-id") + add_custom_attribute("llm.conversation_id", "my-awesome-id") monkeypatch.setattr(openai, "api_key", None) # openai.api_key = None openai.ChatCompletion.create( model="gpt-3.5-turbo", @@ -439,7 +439,7 @@ def test_chat_completion_wrong_api_key_error(monkeypatch, set_trace_info): def test_chat_completion_invalid_request_error_no_model_async(loop, set_trace_info): with pytest.raises(openai.InvalidRequestError): set_trace_info() - 
add_custom_attribute("conversation_id", "my-awesome-id") + add_custom_attribute("llm.conversation_id", "my-awesome-id") loop.run_until_complete( openai.ChatCompletion.acreate( # no model provided, @@ -481,7 +481,7 @@ def test_chat_completion_invalid_request_error_no_model_async(loop, set_trace_in def test_chat_completion_invalid_request_error_invalid_model_async(loop, set_trace_info): with pytest.raises(openai.InvalidRequestError): set_trace_info() - add_custom_attribute("conversation_id", "my-awesome-id") + add_custom_attribute("llm.conversation_id", "my-awesome-id") loop.run_until_complete( openai.ChatCompletion.acreate( model="does-not-exist", @@ -520,7 +520,7 @@ def test_chat_completion_invalid_request_error_invalid_model_async(loop, set_tra def test_chat_completion_authentication_error_async(loop, monkeypatch, set_trace_info): with pytest.raises(openai.error.AuthenticationError): set_trace_info() - add_custom_attribute("conversation_id", "my-awesome-id") + add_custom_attribute("llm.conversation_id", "my-awesome-id") monkeypatch.setattr(openai, "api_key", None) # openai.api_key = None loop.run_until_complete( openai.ChatCompletion.acreate( diff --git a/tests/mlmodel_openai/test_chat_completion_error_v1.py b/tests/mlmodel_openai/test_chat_completion_error_v1.py index 70dc58f998..670689c929 100644 --- a/tests/mlmodel_openai/test_chat_completion_error_v1.py +++ b/tests/mlmodel_openai/test_chat_completion_error_v1.py @@ -127,7 +127,7 @@ def test_chat_completion_invalid_request_error_no_model(set_trace_info, sync_openai_client): with pytest.raises(TypeError): set_trace_info() - add_custom_attribute("conversation_id", "my-awesome-id") + add_custom_attribute("llm.conversation_id", "my-awesome-id") sync_openai_client.chat.completions.create( messages=_test_openai_chat_completion_messages, temperature=0.7, max_tokens=100 ) @@ -160,7 +160,7 @@ def test_chat_completion_invalid_request_error_no_model(set_trace_info, sync_ope def 
test_chat_completion_invalid_request_error_no_model_async(loop, set_trace_info, async_openai_client): with pytest.raises(TypeError): set_trace_info() - add_custom_attribute("conversation_id", "my-awesome-id") + add_custom_attribute("llm.conversation_id", "my-awesome-id") loop.run_until_complete( async_openai_client.chat.completions.create( messages=_test_openai_chat_completion_messages, temperature=0.7, max_tokens=100 @@ -242,7 +242,7 @@ def test_chat_completion_invalid_request_error_no_model_async(loop, set_trace_in def test_chat_completion_invalid_request_error_invalid_model(set_trace_info, sync_openai_client): with pytest.raises(openai.NotFoundError): set_trace_info() - add_custom_attribute("conversation_id", "my-awesome-id") + add_custom_attribute("llm.conversation_id", "my-awesome-id") sync_openai_client.chat.completions.create( model="does-not-exist", messages=({"role": "user", "content": "Model does not exist."},), @@ -281,7 +281,7 @@ def test_chat_completion_invalid_request_error_invalid_model(set_trace_info, syn def test_chat_completion_invalid_request_error_invalid_model_async(loop, set_trace_info, async_openai_client): with pytest.raises(openai.NotFoundError): set_trace_info() - add_custom_attribute("conversation_id", "my-awesome-id") + add_custom_attribute("llm.conversation_id", "my-awesome-id") loop.run_until_complete( async_openai_client.chat.completions.create( model="does-not-exist", diff --git a/tests/mlmodel_openai/test_chat_completion_v1.py b/tests/mlmodel_openai/test_chat_completion_v1.py index 4df977a6c2..b1b35826c9 100644 --- a/tests/mlmodel_openai/test_chat_completion_v1.py +++ b/tests/mlmodel_openai/test_chat_completion_v1.py @@ -146,7 +146,7 @@ @background_task() def test_openai_chat_completion_sync_in_txn_with_convo_id(set_trace_info, sync_openai_client): set_trace_info() - add_custom_attribute("conversation_id", "my-awesome-id") + add_custom_attribute("llm.conversation_id", "my-awesome-id") sync_openai_client.chat.completions.create( 
model="gpt-3.5-turbo", messages=_test_openai_chat_completion_messages, temperature=0.7, max_tokens=100 ) @@ -272,7 +272,7 @@ def test_openai_chat_completion_sync_in_txn_no_convo_id(set_trace_info, sync_ope @reset_core_stats_engine() @validate_custom_event_count(count=0) def test_openai_chat_completion_sync_outside_txn(sync_openai_client): - add_custom_attribute("conversation_id", "my-awesome-id") + add_custom_attribute("llm.conversation_id", "my-awesome-id") sync_openai_client.chat.completions.create( model="gpt-3.5-turbo", messages=_test_openai_chat_completion_messages, temperature=0.7, max_tokens=100 ) @@ -335,7 +335,7 @@ def test_openai_chat_completion_async_conversation_id_unset(loop, set_trace_info @background_task() def test_openai_chat_completion_async_conversation_id_set(loop, set_trace_info, async_openai_client): set_trace_info() - add_custom_attribute("conversation_id", "my-awesome-id") + add_custom_attribute("llm.conversation_id", "my-awesome-id") loop.run_until_complete( async_openai_client.chat.completions.create( diff --git a/tests/mlmodel_openai/test_get_llm_message_ids.py b/tests/mlmodel_openai/test_get_llm_message_ids.py index af073f7300..8489f4f3d3 100644 --- a/tests/mlmodel_openai/test_get_llm_message_ids.py +++ b/tests/mlmodel_openai/test_get_llm_message_ids.py @@ -13,10 +13,14 @@ # limitations under the License. 
 import openai
+from testing_support.fixtures import (
+    reset_core_stats_engine,
+    validate_custom_event_count,
+)
+
 from newrelic.api.background_task import background_task
 from newrelic.api.ml_model import get_llm_message_ids, record_llm_feedback_event
 from newrelic.api.transaction import add_custom_attribute, current_transaction
-from testing_support.fixtures import reset_core_stats_engine, validate_custom_event_count
 
 _test_openai_chat_completion_messages_1 = (
     {"role": "system", "content": "You are a scientist."},
@@ -114,7 +118,7 @@ def test_get_llm_message_ids_outside_transaction():
 @background_task()
 def test_get_llm_message_ids_mulitple_async(loop, set_trace_info):
     set_trace_info()
-    add_custom_attribute("conversation_id", "my-awesome-id")
+    add_custom_attribute("llm.conversation_id", "my-awesome-id")
 
     async def _run():
         res1 = await openai.ChatCompletion.acreate(
@@ -172,7 +176,7 @@ async def _run():
 @background_task()
 def test_get_llm_message_ids_mulitple_sync(set_trace_info):
     set_trace_info()
-    add_custom_attribute("conversation_id", "my-awesome-id")
+    add_custom_attribute("llm.conversation_id", "my-awesome-id")
 
     results = openai.ChatCompletion.create(
         model="gpt-3.5-turbo", messages=_test_openai_chat_completion_messages_1, temperature=0.7, max_tokens=100
diff --git a/tests/mlmodel_openai/test_get_llm_message_ids_v1.py b/tests/mlmodel_openai/test_get_llm_message_ids_v1.py
index f85a26c2a9..094ddcd5a7 100644
--- a/tests/mlmodel_openai/test_get_llm_message_ids_v1.py
+++ b/tests/mlmodel_openai/test_get_llm_message_ids_v1.py
@@ -116,7 +116,7 @@ def test_get_llm_message_ids_outside_transaction():
 @background_task()
 def test_get_llm_message_ids_mulitple_async(loop, set_trace_info, async_openai_client):
     set_trace_info()
-    add_custom_attribute("conversation_id", "my-awesome-id")
+    add_custom_attribute("llm.conversation_id", "my-awesome-id")
 
     async def _run():
         res1 = await async_openai_client.chat.completions.create(
@@ -174,7 +174,7 @@ async def _run():
@background_task() def test_get_llm_message_ids_mulitple_sync(set_trace_info, sync_openai_client): set_trace_info() - add_custom_attribute("conversation_id", "my-awesome-id") + add_custom_attribute("llm.conversation_id", "my-awesome-id") results = sync_openai_client.chat.completions.create( model="gpt-3.5-turbo", messages=_test_openai_chat_completion_messages_1, temperature=0.7, max_tokens=100 From 7051455f076d2f617719d9bf6ddbca6989fa0afd Mon Sep 17 00:00:00 2001 From: Uma Annamalai Date: Thu, 21 Dec 2023 10:38:39 -0800 Subject: [PATCH 019/199] Add support for Meta Llama2. (#1010) * Add support for Llama2. * Fixup: lint errors * [Mega-Linter] Apply linters fixes * Trigger tests --------- Co-authored-by: Hannah Stepanek Co-authored-by: hmstepanek --- newrelic/hooks/external_botocore.py | 51 ++++++-- newrelic/hooks/mlmodel_openai.py | 2 +- .../_mock_external_bedrock_server.py | 21 +++- .../_test_bedrock_chat_completion.py | 115 ++++++++++++++++++ .../test_bedrock_chat_completion.py | 1 + 5 files changed, 180 insertions(+), 10 deletions(-) diff --git a/newrelic/hooks/external_botocore.py b/newrelic/hooks/external_botocore.py index ca63991af6..6e3be661bd 100644 --- a/newrelic/hooks/external_botocore.py +++ b/newrelic/hooks/external_botocore.py @@ -144,7 +144,7 @@ def create_chat_completion_message_event( "response.model": request_model, "vendor": "bedrock", "ingest_source": "Python", - "is_response": True + "is_response": True, } transaction.record_custom_event("LlmChatCompletionMessage", chat_completion_message_dict) @@ -246,7 +246,7 @@ def extract_bedrock_claude_model(request_body, response_body=None): chat_completion_summary_dict = { "request.max_tokens": request_body.get("max_tokens_to_sample", ""), "request.temperature": request_body.get("temperature", ""), - "response.number_of_messages": len(input_message_list) + "response.number_of_messages": len(input_message_list), } if response_body: @@ -264,6 +264,40 @@ def extract_bedrock_claude_model(request_body, 
response_body=None): return input_message_list, output_message_list, chat_completion_summary_dict +def extract_bedrock_llama_model(request_body, response_body=None): + request_body = json.loads(request_body) + if response_body: + response_body = json.loads(response_body) + + input_message_list = [{"role": "user", "content": request_body.get("prompt", "")}] + + chat_completion_summary_dict = { + "request.max_tokens": request_body.get("max_gen_len", ""), + "request.temperature": request_body.get("temperature", ""), + "response.number_of_messages": len(input_message_list), + } + + if response_body: + output_message_list = [{"role": "assistant", "content": response_body.get("generation", "")}] + prompt_tokens = response_body.get("prompt_token_count", None) + completion_tokens = response_body.get("generation_token_count", None) + total_tokens = prompt_tokens + completion_tokens if prompt_tokens and completion_tokens else None + + chat_completion_summary_dict.update( + { + "response.usage.completion_tokens": completion_tokens, + "response.usage.prompt_tokens": prompt_tokens, + "response.usage.total_tokens": total_tokens, + "response.choices.finish_reason": response_body.get("stop_reason", ""), + "response.number_of_messages": len(input_message_list) + len(output_message_list), + } + ) + else: + output_message_list = [] + + return input_message_list, output_message_list, chat_completion_summary_dict + + def extract_bedrock_cohere_model(request_body, response_body=None): request_body = json.loads(request_body) if response_body: @@ -274,7 +308,7 @@ def extract_bedrock_cohere_model(request_body, response_body=None): chat_completion_summary_dict = { "request.max_tokens": request_body.get("max_tokens", ""), "request.temperature": request_body.get("temperature", ""), - "response.number_of_messages": len(input_message_list) + "response.number_of_messages": len(input_message_list), } if response_body: @@ -300,6 +334,7 @@ def extract_bedrock_cohere_model(request_body, 
response_body=None): ("ai21.j2", extract_bedrock_ai21_j2_model), ("cohere", extract_bedrock_cohere_model), ("anthropic.claude", extract_bedrock_claude_model), + ("meta.llama2", extract_bedrock_llama_model), ] @@ -368,7 +403,7 @@ def wrap_bedrock_runtime_invoke_model(wrapped, instance, args, kwargs): notice_error_attributes = { "http.statusCode": error_attributes["http.statusCode"], "error.message": error_attributes["error.message"], - "error.code": error_attributes["error.code"] + "error.code": error_attributes["error.code"], } if is_embedding: @@ -392,7 +427,7 @@ def wrap_bedrock_runtime_invoke_model(wrapped, instance, args, kwargs): ft.duration, True, trace_id, - span_id + span_id, ) else: handle_chat_completion_event( @@ -406,7 +441,7 @@ def wrap_bedrock_runtime_invoke_model(wrapped, instance, args, kwargs): ft.duration, True, trace_id, - span_id + span_id, ) finally: @@ -463,7 +498,7 @@ def handle_embedding_event( duration, is_error, trace_id, - span_id + span_id, ): embedding_id = str(uuid.uuid4()) @@ -508,7 +543,7 @@ def handle_chat_completion_event( duration, is_error, trace_id, - span_id + span_id, ): custom_attrs_dict = transaction._custom_params conversation_id = custom_attrs_dict.get("llm.conversation_id", "") diff --git a/newrelic/hooks/mlmodel_openai.py b/newrelic/hooks/mlmodel_openai.py index 8534502289..94b0b954c5 100644 --- a/newrelic/hooks/mlmodel_openai.py +++ b/newrelic/hooks/mlmodel_openai.py @@ -864,7 +864,7 @@ def wrap_base_client_process_response(wrapped, instance, args, kwargs): nr_response_headers = getattr(response, "headers") return_val = wrapped(*args, **kwargs) - # Obtain reponse headers for v1 + # Obtain response headers for v1 return_val._nr_response_headers = nr_response_headers return return_val diff --git a/tests/external_botocore/_mock_external_bedrock_server.py b/tests/external_botocore/_mock_external_bedrock_server.py index da5ff68dd9..609e7afa93 100644 --- a/tests/external_botocore/_mock_external_bedrock_server.py +++ 
b/tests/external_botocore/_mock_external_bedrock_server.py @@ -3332,6 +3332,16 @@ "prompt": "What is 212 degrees Fahrenheit converted to Celsius?", }, ], + "meta.llama2-13b-chat-v1::What is 212 degrees Fahrenheit converted to Celsius?": [ + {"Content-Type": "application/json", "x-amzn-RequestId": "9a64cdb0-3e82-41c7-873a-c12a77e0143a"}, + 200, + { + "generation": " Here's the answer:\n\n212°F = 100°C\n\nSo, 212 degrees Fahrenheit is equal to 100 degrees Celsius.", + "prompt_token_count": 17, + "generation_token_count": 46, + "stop_reason": "stop", + }, + ], "does-not-exist::": [ { "Content-Type": "application/json", @@ -3395,6 +3405,15 @@ 403, {"message": "The security token included in the request is invalid."}, ], + "meta.llama2-13b-chat-v1::Invalid Token": [ + { + "Content-Type": "application/json", + "x-amzn-RequestId": "22476490-a0d6-42db-b5ea-32d0b8a7f751", + "x-amzn-ErrorType": "UnrecognizedClientException:http://internal.amazon.com/coral/com.amazon.coral.service/", + }, + 403, + {"message": "The security token included in the request is invalid."}, + ], } MODEL_PATH_RE = re.compile(r"/model/([^/]+)/invoke") @@ -3454,7 +3473,7 @@ def __init__(self, handler=simple_get, port=None, *args, **kwargs): if __name__ == "__main__": # Use this to sort dict for easier future incremental updates print("RESPONSES = %s" % dict(sorted(RESPONSES.items(), key=lambda i: (i[1][1], i[0])))) - + with MockExternalBedrockServer() as server: print("MockExternalBedrockServer serving on port %s" % str(server.port)) while True: diff --git a/tests/external_botocore/_test_bedrock_chat_completion.py b/tests/external_botocore/_test_bedrock_chat_completion.py index e3f53fd31f..f1d21c73c7 100644 --- a/tests/external_botocore/_test_bedrock_chat_completion.py +++ b/tests/external_botocore/_test_bedrock_chat_completion.py @@ -3,6 +3,7 @@ "ai21.j2-mid-v1": '{"prompt": "%s", "temperature": %f, "maxTokens": %d}', "anthropic.claude-instant-v1": '{"prompt": "Human: %s Assistant:", "temperature": 
%f, "max_tokens_to_sample": %d}', "cohere.command-text-v14": '{"prompt": "%s", "temperature": %f, "max_tokens": %d}', + "meta.llama2-13b-chat-v1": '{"prompt": "%s", "temperature": %f, "max_gen_len": %d}', } chat_completion_expected_events = { @@ -263,6 +264,72 @@ }, ), ], + "meta.llama2-13b-chat-v1": [ + ( + {"type": "LlmChatCompletionSummary"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (external_botocore)", + "conversation_id": "my-awesome-id", + "transaction_id": "transaction-id", + "span_id": None, + "trace_id": "trace-id", + "request_id": "9a64cdb0-3e82-41c7-873a-c12a77e0143a", + "api_key_last_four_digits": "CRET", + "duration": None, # Response time varies each test run + "request.model": "meta.llama2-13b-chat-v1", + "response.model": "meta.llama2-13b-chat-v1", + "response.usage.prompt_tokens": 17, + "response.usage.completion_tokens": 46, + "response.usage.total_tokens": 63, + "request.temperature": 0.7, + "request.max_tokens": 100, + "response.choices.finish_reason": "stop", + "vendor": "bedrock", + "ingest_source": "Python", + "response.number_of_messages": 2, + }, + ), + ( + {"type": "LlmChatCompletionMessage"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (external_botocore)", + "conversation_id": "my-awesome-id", + "request_id": "9a64cdb0-3e82-41c7-873a-c12a77e0143a", + "span_id": None, + "trace_id": "trace-id", + "transaction_id": "transaction-id", + "content": "What is 212 degrees Fahrenheit converted to Celsius?", + "role": "user", + "completion_id": None, + "sequence": 0, + "response.model": "meta.llama2-13b-chat-v1", + "vendor": "bedrock", + "ingest_source": "Python", + }, + ), + ( + {"type": "LlmChatCompletionMessage"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (external_botocore)", + "conversation_id": "my-awesome-id", + "request_id": "9a64cdb0-3e82-41c7-873a-c12a77e0143a", + "span_id": None, + "trace_id": "trace-id", + 
"transaction_id": "transaction-id", + "content": " Here's the answer:\n\n212°F = 100°C\n\nSo, 212 degrees Fahrenheit is equal to 100 degrees Celsius.", + "role": "assistant", + "completion_id": None, + "sequence": 1, + "response.model": "meta.llama2-13b-chat-v1", + "vendor": "bedrock", + "ingest_source": "Python", + "is_response": True, + }, + ), + ], } chat_completion_invalid_model_error_events = [ @@ -480,6 +547,49 @@ }, ), ], + "meta.llama2-13b-chat-v1": [ + ( + {"type": "LlmChatCompletionSummary"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (external_botocore)", + "conversation_id": "my-awesome-id", + "transaction_id": "transaction-id", + "span_id": None, + "trace_id": "trace-id", + "request_id": "", + "api_key_last_four_digits": "-KEY", + "duration": None, # Response time varies each test run + "request.model": "meta.llama2-13b-chat-v1", + "response.model": "meta.llama2-13b-chat-v1", + "request.temperature": 0.7, + "request.max_tokens": 100, + "vendor": "bedrock", + "ingest_source": "Python", + "response.number_of_messages": 1, + "error": True, + }, + ), + ( + {"type": "LlmChatCompletionMessage"}, + { + "id": None, # UUID that varies with each run + "appName": "Python Agent Test (external_botocore)", + "conversation_id": "my-awesome-id", + "request_id": "", + "span_id": None, + "trace_id": "trace-id", + "transaction_id": "transaction-id", + "content": "Invalid Token", + "role": "user", + "completion_id": None, + "sequence": 0, + "response.model": "meta.llama2-13b-chat-v1", + "vendor": "bedrock", + "ingest_source": "Python", + }, + ), + ], } chat_completion_expected_client_errors = { @@ -503,4 +613,9 @@ "error.message": "The security token included in the request is invalid.", "error.code": "UnrecognizedClientException", }, + "meta.llama2-13b-chat-v1": { + "http.statusCode": 403, + "error.message": "The security token included in the request is invalid.", + "error.code": "UnrecognizedClientException", + }, } diff --git 
a/tests/external_botocore/test_bedrock_chat_completion.py b/tests/external_botocore/test_bedrock_chat_completion.py index 2c4925a43b..c5c2a4706f 100644 --- a/tests/external_botocore/test_bedrock_chat_completion.py +++ b/tests/external_botocore/test_bedrock_chat_completion.py @@ -56,6 +56,7 @@ def is_file_payload(request): "ai21.j2-mid-v1", "anthropic.claude-instant-v1", "cohere.command-text-v14", + "meta.llama2-13b-chat-v1", ], ) def model_id(request): From dae9c79e4241917a8a01d891a14468542ecdc616 Mon Sep 17 00:00:00 2001 From: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> Date: Fri, 29 Dec 2023 13:37:02 -0800 Subject: [PATCH 020/199] Instrumentation for asimilarity_search in Langchain (#1013) * Add asimilarity_search support * [Mega-Linter] Apply linters fixes * Trigger tests * Cleanup code --------- Co-authored-by: lrafeei --- newrelic/hooks/mlmodel_langchain.py | 85 ++++++++++++++++++++- tests/mlmodel_langchain/conftest.py | 4 +- tests/mlmodel_langchain/test_vectorstore.py | 71 +++++++++++++++-- 3 files changed, 149 insertions(+), 11 deletions(-) diff --git a/newrelic/hooks/mlmodel_langchain.py b/newrelic/hooks/mlmodel_langchain.py index 2b2e5d232d..2c501f27fe 100644 --- a/newrelic/hooks/mlmodel_langchain.py +++ b/newrelic/hooks/mlmodel_langchain.py @@ -96,6 +96,84 @@ } +def bind_asimilarity_search(query, k, *args, **kwargs): + return query, k + + +async def wrap_asimilarity_search(wrapped, instance, args, kwargs): + transaction = current_transaction() + if not transaction: + return await wrapped(*args, **kwargs) + + transaction.add_ml_model_info("Langchain", LANGCHAIN_VERSION) + + request_query, request_k = bind_asimilarity_search(*args, **kwargs) + function_name = callable_name(wrapped) + with FunctionTrace(name=function_name, group="Llm/vectorstore/Langchain") as ft: + try: + response = await wrapped(*args, **kwargs) + available_metadata = get_trace_linking_metadata() + except Exception as err: + # Error logic goes here + pass + + if not response: 
+ return response # Should always be None + + # LLMVectorSearch + span_id = available_metadata.get("span.id", "") + trace_id = available_metadata.get("trace.id", "") + transaction_id = transaction.guid + id = str(uuid.uuid4()) + request_query, request_k = bind_similarity_search(*args, **kwargs) + duration = ft.duration + response_number_of_documents = len(response) + + # Only in LlmVectorSearch dict + LLMVectorSearch_dict = { + "request.query": request_query, + "request.k": request_k, + "duration": duration, + "response.number_of_documents": response_number_of_documents, + } + + # In both LlmVectorSearch and LlmVectorSearchResult dicts + LLMVectorSearch_union_dict = { + "span_id": span_id, + "trace_id": trace_id, + "transaction_id": transaction_id, + "id": id, + "vendor": "langchain", + "ingest_source": "Python", + "appName": transaction._application._name, + } + + LLMVectorSearch_dict.update(LLMVectorSearch_union_dict) + transaction.record_custom_event("LlmVectorSearch", LLMVectorSearch_dict) + + # LLMVectorSearchResult + for index, doc in enumerate(response): + search_id = str(uuid.uuid4()) + sequence = index + page_content = getattr(doc, "page_content", "") + metadata = getattr(doc, "metadata", "") + + metadata_dict = {"metadata.%s" % key: value for key, value in metadata.items()} + + LLMVectorSearchResult_dict = { + "search_id": search_id, + "sequence": sequence, + "page_content": page_content, + } + + LLMVectorSearchResult_dict.update(LLMVectorSearch_union_dict) + LLMVectorSearchResult_dict.update(metadata_dict) + + transaction.record_custom_event("LlmVectorSearchResult", LLMVectorSearchResult_dict) + + return response + + def bind_similarity_search(query, k, *args, **kwargs): return query, k @@ -105,9 +183,10 @@ def wrap_similarity_search(wrapped, instance, args, kwargs): if not transaction: return wrapped(*args, **kwargs) + transaction.add_ml_model_info("Langchain", LANGCHAIN_VERSION) request_query, request_k = bind_similarity_search(*args, **kwargs) 
function_name = callable_name(wrapped) - with FunctionTrace(name=function_name) as ft: + with FunctionTrace(name=function_name, group="Llm/vectorstore/Langchain") as ft: try: response = wrapped(*args, **kwargs) available_metadata = get_trace_linking_metadata() @@ -172,12 +251,14 @@ def wrap_similarity_search(wrapped, instance, args, kwargs): # LLMVectorSearchResult_dict |= metadata_dict transaction.record_custom_event("LlmVectorSearchResult", LLMVectorSearchResult_dict) - transaction.add_ml_model_info("Langchain", LANGCHAIN_VERSION) return response def instrument_langchain_vectorstore_similarity_search(module): vector_class = VECTORSTORE_CLASSES.get(module.__name__) + if vector_class and hasattr(getattr(module, vector_class, ""), "similarity_search"): wrap_function_wrapper(module, "%s.similarity_search" % vector_class, wrap_similarity_search) + if vector_class and hasattr(getattr(module, vector_class, ""), "asimilarity_search"): + wrap_function_wrapper(module, "%s.asimilarity_search" % vector_class, wrap_asimilarity_search) diff --git a/tests/mlmodel_langchain/conftest.py b/tests/mlmodel_langchain/conftest.py index 32b4370b3e..1b62070c9b 100644 --- a/tests/mlmodel_langchain/conftest.py +++ b/tests/mlmodel_langchain/conftest.py @@ -58,7 +58,7 @@ @pytest.fixture(scope="session") -def openai_clients(openai_version, MockExternalOpenAIServer): # noqa: F811 +def openai_clients(MockExternalOpenAIServer): # noqa: F811 """ This configures the openai client and returns it for openai v1 and only configures openai for v0 since there is no client. 
@@ -79,7 +79,7 @@ def openai_clients(openai_version, MockExternalOpenAIServer): # noqa: F811 @pytest.fixture(scope="session") -def embeding_openai_client(openai_clients): +def embedding_openai_client(openai_clients): embedding_client = openai_clients return embedding_client diff --git a/tests/mlmodel_langchain/test_vectorstore.py b/tests/mlmodel_langchain/test_vectorstore.py index 7fb03ec80b..50bada3589 100644 --- a/tests/mlmodel_langchain/test_vectorstore.py +++ b/tests/mlmodel_langchain/test_vectorstore.py @@ -75,23 +75,34 @@ ) +# Test to check if all classes containing "similarity_search" +# method are instrumented. Prints out anything that is not +# instrumented to identify when new vectorstores are added. def test_vectorstore_modules_instrumented(): from langchain_community import vectorstores vector_store_classes = tuple(vectorstores.__all__) - uninstrumented_classes = [] + uninstrumented_sync_classes = [] + uninstrumented_async_classes = [] for class_name in vector_store_classes: class_ = getattr(vectorstores, class_name) if ( not hasattr(class_, "similarity_search") or class_name in _test_vectorstore_modules_instrumented_ignored_classes ): + # If "similarity_search" is found, "asimilarity_search" will + # also be found, so separate logic is not necessary to check this. 
continue if not hasattr(getattr(class_, "similarity_search"), "__wrapped__"): - uninstrumented_classes.append(class_name) + uninstrumented_sync_classes.append(class_name) + if not hasattr(getattr(class_, "asimilarity_search"), "__wrapped__"): + uninstrumented_async_classes.append(class_name) - assert not uninstrumented_classes, "Uninstrumented classes found: %s" % str(uninstrumented_classes) + assert not uninstrumented_sync_classes, "Uninstrumented sync classes found: %s" % str(uninstrumented_sync_classes) + assert not uninstrumented_async_classes, "Uninstrumented async classes found: %s" % str( + uninstrumented_async_classes + ) @reset_core_stats_engine() @@ -106,27 +117,73 @@ def test_vectorstore_modules_instrumented(): background_task=True, ) @background_task() -def test_pdf_pagesplitter_vectorstore_in_txn(set_trace_info, embeding_openai_client): +def test_pdf_pagesplitter_vectorstore_in_txn(set_trace_info, embedding_openai_client): set_trace_info() script_dir = os.path.dirname(__file__) loader = PyPDFLoader(os.path.join(script_dir, "hello.pdf")) docs = loader.load() - faiss_index = FAISS.from_documents(docs, embeding_openai_client) + faiss_index = FAISS.from_documents(docs, embedding_openai_client) docs = faiss_index.similarity_search("Complete this sentence: Hello", k=1) assert "Hello world" in docs[0].page_content @reset_core_stats_engine() @validate_custom_event_count(count=0) -def test_pdf_pagesplitter_vectorstore_outside_txn(set_trace_info, embeding_openai_client): +def test_pdf_pagesplitter_vectorstore_outside_txn(set_trace_info, embedding_openai_client): set_trace_info() script_dir = os.path.dirname(__file__) loader = PyPDFLoader(os.path.join(script_dir, "hello.pdf")) docs = loader.load() - faiss_index = FAISS.from_documents(docs, embeding_openai_client) + faiss_index = FAISS.from_documents(docs, embedding_openai_client) docs = faiss_index.similarity_search("Complete this sentence: Hello", k=1) assert "Hello world" in docs[0].page_content + + 
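The `wrap_asimilarity_search` hook introduced in this patch follows a common instrumentation pattern: await the wrapped call inside a timed trace, then emit one summary event plus one event per returned document. A minimal standalone sketch of that pattern, with a hypothetical `record_event` callback standing in for `transaction.record_custom_event` and `time.monotonic()` standing in for `FunctionTrace`:

```python
import asyncio
import functools
import time
import uuid


def wrap_async_similarity_search(wrapped, record_event):
    # Sketch of the async wrapper pattern: time the awaited search, emit one
    # LlmVectorSearch summary event, then one LlmVectorSearchResult event per
    # returned document. `record_event(name, attrs)` is a hypothetical stand-in
    # for transaction.record_custom_event.
    @functools.wraps(wrapped)
    async def wrapper(query, k, *args, **kwargs):
        start = time.monotonic()
        response = await wrapped(query, k, *args, **kwargs)
        duration = time.monotonic() - start

        search_id = str(uuid.uuid4())
        record_event(
            "LlmVectorSearch",
            {
                "id": search_id,
                "request.query": query,
                "request.k": k,
                "duration": duration,
                "response.number_of_documents": len(response),
            },
        )
        for sequence, doc in enumerate(response):
            record_event(
                "LlmVectorSearchResult",
                {
                    "search_id": search_id,
                    "sequence": sequence,
                    "page_content": getattr(doc, "page_content", ""),
                },
            )
        return response

    return wrapper
```

The real hook additionally bails out when there is no active transaction and attaches trace-linking metadata; this sketch keeps only the timing-and-event shape.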
+@reset_core_stats_engine() +@validate_custom_events(vectorstore_recorded_events) +# Two OpenAI LlmEmbedded, two LangChain LlmVectorSearch +@validate_custom_event_count(count=4) +@validate_transaction_metrics( + name="test_vectorstore:test_async_pdf_pagesplitter_vectorstore_in_txn", + custom_metrics=[ + ("Python/ML/Langchain/%s" % LANGCHAIN_VERSION, 1), + ], + background_task=True, +) +@background_task() +def test_async_pdf_pagesplitter_vectorstore_in_txn(loop, set_trace_info, embedding_openai_client): + async def _test(): + set_trace_info() + + script_dir = os.path.dirname(__file__) + loader = PyPDFLoader(os.path.join(script_dir, "hello.pdf")) + docs = loader.load() + + faiss_index = await FAISS.afrom_documents(docs, embedding_openai_client) + docs = await faiss_index.asimilarity_search("Complete this sentence: Hello", k=1) + return docs + + docs = loop.run_until_complete(_test()) + assert "Hello world" in docs[0].page_content + + +@reset_core_stats_engine() +@validate_custom_event_count(count=0) +def test_async_pdf_pagesplitter_vectorstore_outside_txn(loop, set_trace_info, embedding_openai_client): + async def _test(): + set_trace_info() + + script_dir = os.path.dirname(__file__) + loader = PyPDFLoader(os.path.join(script_dir, "hello.pdf")) + docs = loader.load() + + faiss_index = await FAISS.afrom_documents(docs, embedding_openai_client) + docs = await faiss_index.asimilarity_search("Complete this sentence: Hello", k=1) + return docs + + docs = loop.run_until_complete(_test()) + assert "Hello world" in docs[0].page_content From 7f062c4cb5f6b0393a6ef38142c76cbec18ebf99 Mon Sep 17 00:00:00 2001 From: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> Date: Wed, 10 Jan 2024 15:17:14 -0800 Subject: [PATCH 021/199] Add bedrock feedback into preview (#1030) * Add AWS Bedrock testing infrastructure * Squashed commit of the following: commit 2834663794c649124052e510c1c9557a830c060a Author: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> Date: Mon 
Oct 9 17:42:05 2023 -0700 OpenAI Mock Backend (#929) * Add mock external openai server * Add mocked OpenAI server fixtures * Set up recorded responses. * Clean mock server to depend on http server * Linting * Pin flask version for flask restx tests. (#931) * Ignore new redis methods. (#932) Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> * Remove approved paths * Update CI Image (#930) * Update available python versions in CI * Update makefile with overrides * Fix default branch detection for arm builds --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add mocking for embedding endpoint * [Mega-Linter] Apply linters fixes * Add ratelimit headers * [Mega-Linter] Apply linters fixes * Only get package version once (#928) * Only get package version once * Add disconnect method * Add disconnect method --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add datalib dependency for embedding testing. * Add OpenAI Test Infrastructure (#926) * Add openai to tox * Add OpenAI test files. * Add test functions. * [Mega-Linter] Apply linters fixes --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: mergify[bot] * Add mock external openai server * Add mocked OpenAI server fixtures * Set up recorded responses. * Clean mock server to depend on http server * Linting * Remove approved paths * Add mocking for embedding endpoint * [Mega-Linter] Apply linters fixes * Add ratelimit headers * [Mega-Linter] Apply linters fixes * Add datalib dependency for embedding testing. 
--------- Co-authored-by: Uma Annamalai Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: TimPansino Co-authored-by: Hannah Stepanek Co-authored-by: mergify[bot] commit db63d4598c94048986c0e00ebb2cd8827100b54c Author: Uma Annamalai Date: Mon Oct 2 15:31:38 2023 -0700 Add OpenAI Test Infrastructure (#926) * Add openai to tox * Add OpenAI test files. * Add test functions. * [Mega-Linter] Apply linters fixes --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: mergify[bot] * Squashed commit of the following: commit 182c7a8c8a91e2d0f234f7ed7d4a14a2422c8342 Author: Uma Annamalai Date: Fri Oct 13 10:12:55 2023 -0700 Add request/ response IDs. commit f6d13f822c22d2039ec32be86b2c54f9dc3de1c9 Author: Uma Annamalai Date: Thu Oct 12 13:23:39 2023 -0700 Test cleanup. commit d0576631d009e481bd5887a3243aac99b097d823 Author: Uma Annamalai Date: Tue Oct 10 10:23:00 2023 -0700 Remove commented code. commit dd29433e719482babbe5c724e7330b1f6324abd7 Author: Uma Annamalai Date: Tue Oct 10 10:19:01 2023 -0700 Add openai sync instrumentation. commit 2834663794c649124052e510c1c9557a830c060a Author: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> Date: Mon Oct 9 17:42:05 2023 -0700 OpenAI Mock Backend (#929) * Add mock external openai server * Add mocked OpenAI server fixtures * Set up recorded responses. * Clean mock server to depend on http server * Linting * Pin flask version for flask restx tests. (#931) * Ignore new redis methods. 
(#932) Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> * Remove approved paths * Update CI Image (#930) * Update available python versions in CI * Update makefile with overrides * Fix default branch detection for arm builds --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add mocking for embedding endpoint * [Mega-Linter] Apply linters fixes * Add ratelimit headers * [Mega-Linter] Apply linters fixes * Only get package version once (#928) * Only get package version once * Add disconnect method * Add disconnect method --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add datalib dependency for embedding testing. * Add OpenAI Test Infrastructure (#926) * Add openai to tox * Add OpenAI test files. * Add test functions. * [Mega-Linter] Apply linters fixes --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: mergify[bot] * Add mock external openai server * Add mocked OpenAI server fixtures * Set up recorded responses. * Clean mock server to depend on http server * Linting * Remove approved paths * Add mocking for embedding endpoint * [Mega-Linter] Apply linters fixes * Add ratelimit headers * [Mega-Linter] Apply linters fixes * Add datalib dependency for embedding testing. --------- Co-authored-by: Uma Annamalai Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: TimPansino Co-authored-by: Hannah Stepanek Co-authored-by: mergify[bot] commit db63d4598c94048986c0e00ebb2cd8827100b54c Author: Uma Annamalai Date: Mon Oct 2 15:31:38 2023 -0700 Add OpenAI Test Infrastructure (#926) * Add openai to tox * Add OpenAI test files. * Add test functions. 
* [Mega-Linter] Apply linters fixes --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: mergify[bot] * TEMP * Bedrock titan extraction nearly complete * Bedrock Testing Infrastructure (#937) * Add AWS Bedrock testing infrastructure * Cache Package Version Lookups (#946) * Cache _get_package_version * Add Python 2.7 support to get_package_version caching * [Mega-Linter] Apply linters fixes * Bump tests --------- Co-authored-by: SlavaSkvortsov <29122694+SlavaSkvortsov@users.noreply.github.com> Co-authored-by: TimPansino * Fix Redis Generator Methods (#947) * Fix scan_iter for redis * Replace generator methods * Update instance info instrumentation * Remove mistake from uninstrumented methods * Add skip condition to asyncio generator tests * Add skip condition to asyncio generator tests --------- Co-authored-by: Lalleh Rafeei Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Automatic RPM System Updates (#948) * Checkout old action * Adding RPM action * Add dry run * Incorporating action into workflow * Wire secret into custom action * Enable action * Correct action name * Fix syntax * Fix quoting issues * Drop pre-verification. 
Does not work on python * Fix merge artifact * Remove OpenAI references --------- Co-authored-by: Uma Annamalai Co-authored-by: SlavaSkvortsov <29122694+SlavaSkvortsov@users.noreply.github.com> Co-authored-by: TimPansino Co-authored-by: Lalleh Rafeei Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Cleaning up titan bedrock implementation * TEMP * Tests for bedrock passing Co-authored-by: Lalleh Rafeei * Cleaned up titan testing Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek * Parametrized bedrock testing * Add support for AI21-J2 models * Change to dynamic no conversation id events * Drop all openai refs * [Mega-Linter] Apply linters fixes * Adding response_id and response_model * Apply suggestions from code review * Remove unused import * Bedrock Sync Chat Completion Instrumentation (#953) * Add AWS Bedrock testing infrastructure * Squashed commit of the following: commit 2834663794c649124052e510c1c9557a830c060a Author: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> Date: Mon Oct 9 17:42:05 2023 -0700 OpenAI Mock Backend (#929) * Add mock external openai server * Add mocked OpenAI server fixtures * Set up recorded responses. * Clean mock server to depend on http server * Linting * Pin flask version for flask restx tests. (#931) * Ignore new redis methods. 
(#932) Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> * Remove approved paths * Update CI Image (#930) * Update available python versions in CI * Update makefile with overrides * Fix default branch detection for arm builds --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add mocking for embedding endpoint * [Mega-Linter] Apply linters fixes * Add ratelimit headers * [Mega-Linter] Apply linters fixes * Only get package version once (#928) * Only get package version once * Add disconnect method * Add disconnect method --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add datalib dependency for embedding testing. * Add OpenAI Test Infrastructure (#926) * Add openai to tox * Add OpenAI test files. * Add test functions. * [Mega-Linter] Apply linters fixes --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: mergify[bot] * Add mock external openai server * Add mocked OpenAI server fixtures * Set up recorded responses. * Clean mock server to depend on http server * Linting * Remove approved paths * Add mocking for embedding endpoint * [Mega-Linter] Apply linters fixes * Add ratelimit headers * [Mega-Linter] Apply linters fixes * Add datalib dependency for embedding testing. --------- Co-authored-by: Uma Annamalai Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: TimPansino Co-authored-by: Hannah Stepanek Co-authored-by: mergify[bot] commit db63d4598c94048986c0e00ebb2cd8827100b54c Author: Uma Annamalai Date: Mon Oct 2 15:31:38 2023 -0700 Add OpenAI Test Infrastructure (#926) * Add openai to tox * Add OpenAI test files. * Add test functions. 
* [Mega-Linter] Apply linters fixes --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: mergify[bot] * Squashed commit of the following: commit 182c7a8c8a91e2d0f234f7ed7d4a14a2422c8342 Author: Uma Annamalai Date: Fri Oct 13 10:12:55 2023 -0700 Add request/ response IDs. commit f6d13f822c22d2039ec32be86b2c54f9dc3de1c9 Author: Uma Annamalai Date: Thu Oct 12 13:23:39 2023 -0700 Test cleanup. commit d0576631d009e481bd5887a3243aac99b097d823 Author: Uma Annamalai Date: Tue Oct 10 10:23:00 2023 -0700 Remove commented code. commit dd29433e719482babbe5c724e7330b1f6324abd7 Author: Uma Annamalai Date: Tue Oct 10 10:19:01 2023 -0700 Add openai sync instrumentation. commit 2834663794c649124052e510c1c9557a830c060a Author: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> Date: Mon Oct 9 17:42:05 2023 -0700 OpenAI Mock Backend (#929) * Add mock external openai server * Add mocked OpenAI server fixtures * Set up recorded responses. * Clean mock server to depend on http server * Linting * Pin flask version for flask restx tests. (#931) * Ignore new redis methods. (#932) Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> * Remove approved paths * Update CI Image (#930) * Update available python versions in CI * Update makefile with overrides * Fix default branch detection for arm builds --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add mocking for embedding endpoint * [Mega-Linter] Apply linters fixes * Add ratelimit headers * [Mega-Linter] Apply linters fixes * Only get package version once (#928) * Only get package version once * Add disconnect method * Add disconnect method --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add datalib dependency for embedding testing. * Add OpenAI Test Infrastructure (#926) * Add openai to tox * Add OpenAI test files. * Add test functions. 
* [Mega-Linter] Apply linters fixes --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: mergify[bot] * Add mock external openai server * Add mocked OpenAI server fixtures * Set up recorded responses. * Clean mock server to depend on http server * Linting * Remove approved paths * Add mocking for embedding endpoint * [Mega-Linter] Apply linters fixes * Add ratelimit headers * [Mega-Linter] Apply linters fixes * Add datalib dependency for embedding testing. --------- Co-authored-by: Uma Annamalai Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: TimPansino Co-authored-by: Hannah Stepanek Co-authored-by: mergify[bot] commit db63d4598c94048986c0e00ebb2cd8827100b54c Author: Uma Annamalai Date: Mon Oct 2 15:31:38 2023 -0700 Add OpenAI Test Infrastructure (#926) * Add openai to tox * Add OpenAI test files. * Add test functions. 
* [Mega-Linter] Apply linters fixes --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: mergify[bot] * Cache Package Version Lookups (#946) * Cache _get_package_version * Add Python 2.7 support to get_package_version caching * [Mega-Linter] Apply linters fixes * Bump tests --------- Co-authored-by: SlavaSkvortsov <29122694+SlavaSkvortsov@users.noreply.github.com> Co-authored-by: TimPansino * Fix Redis Generator Methods (#947) * Fix scan_iter for redis * Replace generator methods * Update instance info instrumentation * Remove mistake from uninstrumented methods * Add skip condition to asyncio generator tests * Add skip condition to asyncio generator tests --------- Co-authored-by: Lalleh Rafeei Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * TEMP * Automatic RPM System Updates (#948) * Checkout old action * Adding RPM action * Add dry run * Incorporating action into workflow * Wire secret into custom action * Enable action * Correct action name * Fix syntax * Fix quoting issues * Drop pre-verification. 
Does not work on python * Fix merge artifact * Bedrock titan extraction nearly complete * Cleaning up titan bedrock implementation * TEMP * Tests for bedrock passing Co-authored-by: Lalleh Rafeei * Cleaned up titan testing Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek * Parametrized bedrock testing * Add support for AI21-J2 models * Change to dynamic no conversation id events * Drop all openai refs * [Mega-Linter] Apply linters fixes * Adding response_id and response_model * Drop python 3.7 tests for Hypercorn (#954) * Apply suggestions from code review * Remove unused import --------- Co-authored-by: Uma Annamalai Co-authored-by: SlavaSkvortsov <29122694+SlavaSkvortsov@users.noreply.github.com> Co-authored-by: TimPansino Co-authored-by: Lalleh Rafeei Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> * Initial feedback commit for botocore * Feature bedrock cohere instrumentation (#955) * Add AWS Bedrock testing infrastructure * Squashed commit of the following: commit 2834663794c649124052e510c1c9557a830c060a Author: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> Date: Mon Oct 9 17:42:05 2023 -0700 OpenAI Mock Backend (#929) * Add mock external openai server * Add mocked OpenAI server fixtures * Set up recorded responses. * Clean mock server to depend on http server * Linting * Pin flask version for flask restx tests. (#931) * Ignore new redis methods. 
(#932) Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> * Remove approved paths * Update CI Image (#930) * Update available python versions in CI * Update makefile with overrides * Fix default branch detection for arm builds --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add mocking for embedding endpoint * [Mega-Linter] Apply linters fixes * Add ratelimit headers * [Mega-Linter] Apply linters fixes * Only get package version once (#928) * Only get package version once * Add disconnect method * Add disconnect method --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add datalib dependency for embedding testing. * Add OpenAI Test Infrastructure (#926) * Add openai to tox * Add OpenAI test files. * Add test functions. * [Mega-Linter] Apply linters fixes --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: mergify[bot] * Add mock external openai server * Add mocked OpenAI server fixtures * Set up recorded responses. * Clean mock server to depend on http server * Linting * Remove approved paths * Add mocking for embedding endpoint * [Mega-Linter] Apply linters fixes * Add ratelimit headers * [Mega-Linter] Apply linters fixes * Add datalib dependency for embedding testing. --------- Co-authored-by: Uma Annamalai Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: TimPansino Co-authored-by: Hannah Stepanek Co-authored-by: mergify[bot] commit db63d4598c94048986c0e00ebb2cd8827100b54c Author: Uma Annamalai Date: Mon Oct 2 15:31:38 2023 -0700 Add OpenAI Test Infrastructure (#926) * Add openai to tox * Add OpenAI test files. * Add test functions. 
* [Mega-Linter] Apply linters fixes --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: mergify[bot] * Squashed commit of the following: commit 182c7a8c8a91e2d0f234f7ed7d4a14a2422c8342 Author: Uma Annamalai Date: Fri Oct 13 10:12:55 2023 -0700 Add request/ response IDs. commit f6d13f822c22d2039ec32be86b2c54f9dc3de1c9 Author: Uma Annamalai Date: Thu Oct 12 13:23:39 2023 -0700 Test cleanup. commit d0576631d009e481bd5887a3243aac99b097d823 Author: Uma Annamalai Date: Tue Oct 10 10:23:00 2023 -0700 Remove commented code. commit dd29433e719482babbe5c724e7330b1f6324abd7 Author: Uma Annamalai Date: Tue Oct 10 10:19:01 2023 -0700 Add openai sync instrumentation. commit 2834663794c649124052e510c1c9557a830c060a Author: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> Date: Mon Oct 9 17:42:05 2023 -0700 OpenAI Mock Backend (#929) * Add mock external openai server * Add mocked OpenAI server fixtures * Set up recorded responses. * Clean mock server to depend on http server * Linting * Pin flask version for flask restx tests. (#931) * Ignore new redis methods. (#932) Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> * Remove approved paths * Update CI Image (#930) * Update available python versions in CI * Update makefile with overrides * Fix default branch detection for arm builds --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add mocking for embedding endpoint * [Mega-Linter] Apply linters fixes * Add ratelimit headers * [Mega-Linter] Apply linters fixes * Only get package version once (#928) * Only get package version once * Add disconnect method * Add disconnect method --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add datalib dependency for embedding testing. * Add OpenAI Test Infrastructure (#926) * Add openai to tox * Add OpenAI test files. * Add test functions. 
* [Mega-Linter] Apply linters fixes --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: mergify[bot] * Add mock external openai server * Add mocked OpenAI server fixtures * Set up recorded responses. * Clean mock server to depend on http server * Linting * Remove approved paths * Add mocking for embedding endpoint * [Mega-Linter] Apply linters fixes * Add ratelimit headers * [Mega-Linter] Apply linters fixes * Add datalib dependency for embedding testing. --------- Co-authored-by: Uma Annamalai Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: TimPansino Co-authored-by: Hannah Stepanek Co-authored-by: mergify[bot] commit db63d4598c94048986c0e00ebb2cd8827100b54c Author: Uma Annamalai Date: Mon Oct 2 15:31:38 2023 -0700 Add OpenAI Test Infrastructure (#926) * Add openai to tox * Add OpenAI test files. * Add test functions. 
* [Mega-Linter] Apply linters fixes --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: mergify[bot] * TEMP * Bedrock titan extraction nearly complete * Cleaning up titan bedrock implementation * TEMP * Tests for bedrock passing Co-authored-by: Lalleh Rafeei * Cleaned up titan testing Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek * Parametrized bedrock testing * Add support for AI21-J2 models * Change to dynamic no conversation id events * Add cohere model * Remove openai instrumentation from this branch * Remove OpenAI from newrelic/config.py --------- Co-authored-by: Uma Annamalai Co-authored-by: Tim Pansino Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek * Bedrock feedback w/ testing for titan and jurassic models * AWS Bedrock Embedding Instrumentation (#957) * AWS Bedrock embedding instrumentation * Correct symbol name * Add support for bedrock claude (#960) Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> * Fix merge conflicts * Combine Botocore Tests (#959) * Initial file migration * Enable DT on all span tests * Add pytest skip for older botocore versions * Fixup: app name merge conflict --------- Co-authored-by: Hannah Stepanek * Add to and move feedback tests * Handle 0.32.0.post1 version in tests (#963) * Remove response_id dependency in bedrock * Change API name * Update moto * Bedrock Error Tracing (#966) * Cache Package Version Lookups (#946) * Cache _get_package_version * Add Python 2.7 support to get_package_version caching * [Mega-Linter] Apply linters fixes * Bump tests --------- Co-authored-by: SlavaSkvortsov <29122694+SlavaSkvortsov@users.noreply.github.com> Co-authored-by: TimPansino * Fix Redis Generator Methods (#947) * Fix scan_iter for redis * Replace generator methods * Update instance info instrumentation * Remove mistake from uninstrumented methods * Add skip condition to asyncio generator tests * Add skip condition to asyncio 
generator tests --------- Co-authored-by: Lalleh Rafeei Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Automatic RPM System Updates (#948) * Checkout old action * Adding RPM action * Add dry run * Incorporating action into workflow * Wire secret into custom action * Enable action * Correct action name * Fix syntax * Fix quoting issues * Drop pre-verification. Does not work on python * Fix merge artifact * Drop python 3.7 tests for Hypercorn (#954) * Fix pyenv installation for devcontainer (#936) Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Remove duplicate kafka import hook (#956) Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Initial bedrock error tracing commit * Handle 0.32.0.post1 version in tests (#963) * Add status code to mock bedrock server * Updating error response recording logic * Work on bedrock errror tracing * Chat completion error tracing * Adding embedding error tracing * Delete comment * Update moto --------- Co-authored-by: SlavaSkvortsov <29122694+SlavaSkvortsov@users.noreply.github.com> Co-authored-by: TimPansino Co-authored-by: Lalleh Rafeei Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> Co-authored-by: Hannah Stepanek * Change ids to match other tests * move message_ids declaration outside for loop * Add comment to tox.ini * Drop py27 from memcache testing. * Drop pypy27 from memcache testing. 
* Update flaskrestx testing #1004 * Remove tastypie 0.14.3 testing * Remove tastypie 0.14.3 testing * Remove python 3.12 support (for now) * Remove untouched files from diff list --------- Co-authored-by: Uma Annamalai Co-authored-by: Tim Pansino Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> Co-authored-by: SlavaSkvortsov <29122694+SlavaSkvortsov@users.noreply.github.com> Co-authored-by: TimPansino Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek Co-authored-by: Hannah Stepanek --- newrelic/api/ml_model.py | 9 +- newrelic/hooks/external_botocore.py | 9 +- .../_test_bedrock_chat_completion.py | 87 +++++++++++++++++++ .../_test_bedrock_embeddings.py | 18 +++- .../test_bedrock_embeddings.py | 2 +- tox.ini | 23 +++-- 6 files changed, 129 insertions(+), 19 deletions(-) diff --git a/newrelic/api/ml_model.py b/newrelic/api/ml_model.py index 3d15cf8d37..03408253bc 100644 --- a/newrelic/api/ml_model.py +++ b/newrelic/api/ml_model.py @@ -40,12 +40,15 @@ def wrap_mlmodel(model, name=None, version=None, feature_names=None, label_names def get_llm_message_ids(response_id=None): transaction = current_transaction() - if response_id and transaction: + if transaction: nr_message_ids = getattr(transaction, "_nr_message_ids", {}) - message_id_info = nr_message_ids.pop(response_id, ()) + message_id_info = ( + nr_message_ids.pop("bedrock_key", ()) if not response_id else nr_message_ids.pop(response_id, ()) + ) if not message_id_info: - warnings.warn("No message ids found for %s" % response_id) + response_id_warning = "." if not response_id else " for %s." 
% response_id + warnings.warn("No message ids found%s" % response_id_warning) return [] conversation_id, request_id, ids = message_id_info diff --git a/newrelic/hooks/external_botocore.py b/newrelic/hooks/external_botocore.py index 6e3be661bd..69a2fd9361 100644 --- a/newrelic/hooks/external_botocore.py +++ b/newrelic/hooks/external_botocore.py @@ -97,6 +97,7 @@ def create_chat_completion_message_event( if not transaction: return + message_ids = [] for index, message in enumerate(input_message_list): if response_id: id_ = "%s-%d" % (response_id, index) # Response ID was set, append message index to it. @@ -128,6 +129,7 @@ def create_chat_completion_message_event( id_ = "%s-%d" % (response_id, index) # Response ID was set, append message index to it. else: id_ = str(uuid.uuid4()) # No response IDs, use random UUID + message_ids.append(id_) chat_completion_message_dict = { "id": id_, @@ -147,6 +149,7 @@ def create_chat_completion_message_event( "is_response": True, } transaction.record_custom_event("LlmChatCompletionMessage", chat_completion_message_dict) + return (conversation_id, request_id, message_ids) def extract_bedrock_titan_text_model(request_body, response_body=None): @@ -577,7 +580,7 @@ def handle_chat_completion_event( transaction.record_custom_event("LlmChatCompletionSummary", chat_completion_summary_dict) - create_chat_completion_message_event( + message_ids = create_chat_completion_message_event( transaction=transaction, app_name=settings.app_name, input_message_list=input_message_list, @@ -591,6 +594,10 @@ def handle_chat_completion_event( response_id=response_id, ) + if not hasattr(transaction, "_nr_message_ids"): + transaction._nr_message_ids = {} + transaction._nr_message_ids["bedrock_key"] = message_ids + CUSTOM_TRACE_POINTS = { ("sns", "publish"): message_trace("SNS", "Produce", "Topic", extract(("TopicArn", "TargetArn"), "PhoneNumber")), diff --git a/tests/external_botocore/_test_bedrock_chat_completion.py 
b/tests/external_botocore/_test_bedrock_chat_completion.py index f1d21c73c7..652027719c 100644 --- a/tests/external_botocore/_test_bedrock_chat_completion.py +++ b/tests/external_botocore/_test_bedrock_chat_completion.py @@ -1,3 +1,17 @@ +# Copyright 2010 New Relic, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + chat_completion_payload_templates = { "amazon.titan-text-express-v1": '{ "inputText": "%s", "textGenerationConfig": {"temperature": %f, "maxTokenCount": %d }}', "ai21.j2-mid-v1": '{"prompt": "%s", "temperature": %f, "maxTokens": %d}', @@ -6,6 +20,79 @@ "meta.llama2-13b-chat-v1": '{"prompt": "%s", "temperature": %f, "max_gen_len": %d}', } +chat_completion_get_llm_message_ids = { + "amazon.titan-text-express-v1": { + "bedrock_key": [ + { + "conversation_id": "my-awesome-id", + "request_id": "03524118-8d77-430f-9e08-63b5c03a40cf", + "message_id": None, # UUID that varies with each run + }, + { + "conversation_id": "my-awesome-id", + "request_id": "03524118-8d77-430f-9e08-63b5c03a40cf", + "message_id": None, # UUID that varies with each run + }, + ] + }, + "ai21.j2-mid-v1": { + "bedrock_key": [ + { + "conversation_id": "my-awesome-id", + "request_id": "c863d9fc-888b-421c-a175-ac5256baec62", + "message_id": "1234-0", + }, + { + "conversation_id": "my-awesome-id", + "request_id": "c863d9fc-888b-421c-a175-ac5256baec62", + "message_id": "1234-1", + }, + ] + }, + "anthropic.claude-instant-v1": { + "bedrock_key": [ + { + "conversation_id": "my-awesome-id", + 
"request_id": "7b0b37c6-85fb-4664-8f5b-361ca7b1aa18", + "message_id": None, # UUID that varies with each run + }, + { + "conversation_id": "my-awesome-id", + "request_id": "7b0b37c6-85fb-4664-8f5b-361ca7b1aa18", + "message_id": None, # UUID that varies with each run + }, + ] + }, + "cohere.command-text-v14": { + "bedrock_key": [ + { + "conversation_id": "my-awesome-id", + "request_id": "e77422c8-fbbf-4e17-afeb-c758425c9f97", + "message_id": "e77422c8-fbbf-4e17-afeb-c758425c9f97-0", + }, + { + "conversation_id": "my-awesome-id", + "request_id": "e77422c8-fbbf-4e17-afeb-c758425c9f97", + "message_id": "e77422c8-fbbf-4e17-afeb-c758425c9f97-1", + }, + ] + }, + "meta.llama2-13b-chat-v1": { + "bedrock_key": [ + { + "conversation_id": "my-awesome-id", + "request_id": "9a64cdb0-3e82-41c7-873a-c12a77e0143a", + "message_id": "9a64cdb0-3e82-41c7-873a-c12a77e0143a-0", + }, + { + "conversation_id": "my-awesome-id", + "request_id": "9a64cdb0-3e82-41c7-873a-c12a77e0143a", + "message_id": "9a64cdb0-3e82-41c7-873a-c12a77e0143a-1", + }, + ] + }, +} + chat_completion_expected_events = { "amazon.titan-text-express-v1": [ ( diff --git a/tests/external_botocore/_test_bedrock_embeddings.py b/tests/external_botocore/_test_bedrock_embeddings.py index ec677b426c..05c8a390ca 100644 --- a/tests/external_botocore/_test_bedrock_embeddings.py +++ b/tests/external_botocore/_test_bedrock_embeddings.py @@ -1,3 +1,17 @@ +# Copyright 2010 New Relic, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ embedding_payload_templates = { "amazon.titan-embed-text-v1": '{ "inputText": "%s" }', "amazon.titan-embed-g1-text-02": '{ "inputText": "%s" }', @@ -68,7 +82,7 @@ "request_id": "", "vendor": "bedrock", "ingest_source": "Python", - "error": True + "error": True, }, ), ], @@ -89,7 +103,7 @@ "request_id": "", "vendor": "bedrock", "ingest_source": "Python", - "error": True + "error": True, }, ), ], diff --git a/tests/external_botocore/test_bedrock_embeddings.py b/tests/external_botocore/test_bedrock_embeddings.py index cc442fc158..9fc0164714 100644 --- a/tests/external_botocore/test_bedrock_embeddings.py +++ b/tests/external_botocore/test_bedrock_embeddings.py @@ -19,8 +19,8 @@ import pytest from _test_bedrock_embeddings import ( embedding_expected_client_errors, - embedding_expected_events, embedding_expected_error_events, + embedding_expected_events, embedding_payload_templates, ) from conftest import BOTOCORE_VERSION diff --git a/tox.ini b/tox.ini index a0827dea61..bc54db2b92 100644 --- a/tox.ini +++ b/tox.ini @@ -67,11 +67,11 @@ envlist = python-mlmodel_sklearn-{py37}-scikitlearn0101, python-component_djangorestframework-py27-djangorestframework0300, python-component_djangorestframework-{py37,py38,py39,py310,py311}-djangorestframeworklatest, - python-component_flask_rest-{py37,py38,py39,pypy38}-flaskrestxlatest, + python-component_flask_rest-py37-flaskrestx110, + python-component_flask_rest-{py38,py39,py310,py311,pypy38}-flaskrestxlatest, python-component_flask_rest-{py27,pypy27}-flaskrestx051, python-component_graphqlserver-{py37,py38,py39,py310,py311}, - python-component_tastypie-{py27,pypy27}-tastypie0143, - python-component_tastypie-{py37,py38,py39,pypy38}-tastypie{0143,latest}, + python-component_tastypie-{py37,py38,py39,pypy38}-tastypielatest, python-coroutines_asyncio-{py37,py38,py39,py310,py311,pypy38}, python-cross_agent-{py27,py37,py38,py39,py310,py311}-{with,without}_extensions, python-cross_agent-pypy27-without_extensions, @@ -79,7 +79,7 @@ envlist = 
memcached-datastore_bmemcached-{pypy27,py27,py37,py38,py39,py310,py311}-memcached030, elasticsearchserver07-datastore_elasticsearch-{py27,py37,py38,py39,py310,py311,pypy27,pypy38}-elasticsearch07, elasticsearchserver08-datastore_elasticsearch-{py37,py38,py39,py310,py311,pypy38}-elasticsearch08, - memcached-datastore_memcache-{py27,py37,py38,py39,py310,py311,pypy27,pypy38}-memcached01, + memcached-datastore_memcache-{py37,py38,py39,py310,py311,pypy38}-memcached01, mysql-datastore_mysql-mysql080023-py27, mysql-datastore_mysql-mysqllatest-{py37,py38,py39,py310,py311}, firestore-datastore_firestore-{py37,py38,py39,py310,py311}, @@ -177,7 +177,8 @@ deps = adapter_gunicorn-aiohttp3: aiohttp<4.0 adapter_gunicorn-gunicorn19: gunicorn<20 adapter_gunicorn-gunicornlatest: gunicorn - adapter_hypercorn-hypercornlatest: hypercorn + ; Temporarily pinned. Needs to be addressed + adapter_hypercorn-hypercornlatest: hypercorn<0.16 adapter_hypercorn-hypercorn0013: hypercorn<0.14 adapter_hypercorn-hypercorn0012: hypercorn<0.13 adapter_hypercorn-hypercorn0011: hypercorn<0.12 @@ -204,23 +205,21 @@ deps = component_djangorestframework-djangorestframework0300: djangorestframework<3.1 component_djangorestframework-djangorestframeworklatest: Django component_djangorestframework-djangorestframeworklatest: djangorestframework - component_flask_rest: flask component_flask_rest: flask-restful component_flask_rest: jinja2 component_flask_rest: itsdangerous + component_flask_rest-flaskrestxlatest: flask component_flask_rest-flaskrestxlatest: flask-restx - ; Pin Flask version until flask-restx is updated to support v3 - component_flask_rest-flaskrestxlatest: flask<3.0 + ; flask-restx only supports Flask v3 after flask-restx v1.3.0 + component_flask_rest-flaskrestx110: Flask<3.0 + component_flask_rest-flaskrestx110: flask-restx<1.2 + component_flask_rest-flaskrestx051: Flask<3.0 component_flask_rest-flaskrestx051: flask-restx<1.0 component_graphqlserver: graphql-server[sanic,flask]==3.0.0b5 
component_graphqlserver: sanic>20 component_graphqlserver: Flask component_graphqlserver: markupsafe<2.1 component_graphqlserver: jinja2<3.1 - component_tastypie-tastypie0143: django-tastypie<0.14.4 - component_tastypie-{py27,pypy27}-tastypie0143: django<1.12 - component_tastypie-{py37,py38,py39,py310,py311,pypy38}-tastypie0143: django<3.0.1 - component_tastypie-{py37,py38,py39,py310,py311,pypy38}-tastypie0143: asgiref<3.7.1 # asgiref==3.7.1 only supports Python 3.10+ component_tastypie-tastypielatest: django-tastypie component_tastypie-tastypielatest: django<4.1 component_tastypie-tastypielatest: asgiref<3.7.1 # asgiref==3.7.1 only supports Python 3.10+ From 9b557755eb10e165d1ed027cf8e34f715a479c09 Mon Sep 17 00:00:00 2001 From: Hannah Stepanek Date: Wed, 17 Jan 2024 09:43:25 -0800 Subject: [PATCH 022/199] Merge main into preview 3 (#1032) * Fix botocore tests (#973) * Bedrock Testing Infrastructure (#937) * Add AWS Bedrock testing infrastructure * Cache Package Version Lookups (#946) * Cache _get_package_version * Add Python 2.7 support to get_package_version caching * [Mega-Linter] Apply linters fixes * Bump tests --------- Co-authored-by: SlavaSkvortsov <29122694+SlavaSkvortsov@users.noreply.github.com> Co-authored-by: TimPansino * Fix Redis Generator Methods (#947) * Fix scan_iter for redis * Replace generator methods * Update instance info instrumentation * Remove mistake from uninstrumented methods * Add skip condition to asyncio generator tests * Add skip condition to asyncio generator tests --------- Co-authored-by: Lalleh Rafeei Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Automatic RPM System Updates (#948) * Checkout old action * Adding RPM action * Add dry run * Incorporating action into workflow * Wire secret into custom action * Enable action * Correct action name * Fix syntax * Fix quoting issues * Drop pre-verification.
Does not work on python * Fix merge artifact * Remove OpenAI references --------- Co-authored-by: Uma Annamalai Co-authored-by: SlavaSkvortsov <29122694+SlavaSkvortsov@users.noreply.github.com> Co-authored-by: TimPansino Co-authored-by: Lalleh Rafeei Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Bedrock Sync Chat Completion Instrumentation (#953) * Add AWS Bedrock testing infrastructure * Squashed commit of the following: commit 2834663794c649124052e510c1c9557a830c060a Author: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> Date: Mon Oct 9 17:42:05 2023 -0700 OpenAI Mock Backend (#929) * Add mock external openai server * Add mocked OpenAI server fixtures * Set up recorded responses. * Clean mock server to depend on http server * Linting * Pin flask version for flask restx tests. (#931) * Ignore new redis methods. (#932) Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> * Remove approved paths * Update CI Image (#930) * Update available python versions in CI * Update makefile with overrides * Fix default branch detection for arm builds --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add mocking for embedding endpoint * [Mega-Linter] Apply linters fixes * Add ratelimit headers * [Mega-Linter] Apply linters fixes * Only get package version once (#928) * Only get package version once * Add disconnect method * Add disconnect method --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add datalib dependency for embedding testing. * Add OpenAI Test Infrastructure (#926) * Add openai to tox * Add OpenAI test files. * Add test functions. * [Mega-Linter] Apply linters fixes --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: mergify[bot] * Add mock external openai server * Add mocked OpenAI server fixtures * Set up recorded responses. 
* Clean mock server to depend on http server * Linting * Remove approved paths * Add mocking for embedding endpoint * [Mega-Linter] Apply linters fixes * Add ratelimit headers * [Mega-Linter] Apply linters fixes * Add datalib dependency for embedding testing. --------- Co-authored-by: Uma Annamalai Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: TimPansino Co-authored-by: Hannah Stepanek Co-authored-by: mergify[bot] commit db63d4598c94048986c0e00ebb2cd8827100b54c Author: Uma Annamalai Date: Mon Oct 2 15:31:38 2023 -0700 Add OpenAI Test Infrastructure (#926) * Add openai to tox * Add OpenAI test files. * Add test functions. * [Mega-Linter] Apply linters fixes --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: mergify[bot] * Squashed commit of the following: commit 182c7a8c8a91e2d0f234f7ed7d4a14a2422c8342 Author: Uma Annamalai Date: Fri Oct 13 10:12:55 2023 -0700 Add request/ response IDs. commit f6d13f822c22d2039ec32be86b2c54f9dc3de1c9 Author: Uma Annamalai Date: Thu Oct 12 13:23:39 2023 -0700 Test cleanup. commit d0576631d009e481bd5887a3243aac99b097d823 Author: Uma Annamalai Date: Tue Oct 10 10:23:00 2023 -0700 Remove commented code. commit dd29433e719482babbe5c724e7330b1f6324abd7 Author: Uma Annamalai Date: Tue Oct 10 10:19:01 2023 -0700 Add openai sync instrumentation. commit 2834663794c649124052e510c1c9557a830c060a Author: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> Date: Mon Oct 9 17:42:05 2023 -0700 OpenAI Mock Backend (#929) * Add mock external openai server * Add mocked OpenAI server fixtures * Set up recorded responses. * Clean mock server to depend on http server * Linting * Pin flask version for flask restx tests. (#931) * Ignore new redis methods. 
(#932) Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> * Remove approved paths * Update CI Image (#930) * Update available python versions in CI * Update makefile with overrides * Fix default branch detection for arm builds --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add mocking for embedding endpoint * [Mega-Linter] Apply linters fixes * Add ratelimit headers * [Mega-Linter] Apply linters fixes * Only get package version once (#928) * Only get package version once * Add disconnect method * Add disconnect method --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add datalib dependency for embedding testing. * Add OpenAI Test Infrastructure (#926) * Add openai to tox * Add OpenAI test files. * Add test functions. * [Mega-Linter] Apply linters fixes --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: mergify[bot] * Add mock external openai server * Add mocked OpenAI server fixtures * Set up recorded responses. * Clean mock server to depend on http server * Linting * Remove approved paths * Add mocking for embedding endpoint * [Mega-Linter] Apply linters fixes * Add ratelimit headers * [Mega-Linter] Apply linters fixes * Add datalib dependency for embedding testing. --------- Co-authored-by: Uma Annamalai Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: TimPansino Co-authored-by: Hannah Stepanek Co-authored-by: mergify[bot] commit db63d4598c94048986c0e00ebb2cd8827100b54c Author: Uma Annamalai Date: Mon Oct 2 15:31:38 2023 -0700 Add OpenAI Test Infrastructure (#926) * Add openai to tox * Add OpenAI test files. * Add test functions. 
* [Mega-Linter] Apply linters fixes --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: mergify[bot] * Cache Package Version Lookups (#946) * Cache _get_package_version * Add Python 2.7 support to get_package_version caching * [Mega-Linter] Apply linters fixes * Bump tests --------- Co-authored-by: SlavaSkvortsov <29122694+SlavaSkvortsov@users.noreply.github.com> Co-authored-by: TimPansino * Fix Redis Generator Methods (#947) * Fix scan_iter for redis * Replace generator methods * Update instance info instrumentation * Remove mistake from uninstrumented methods * Add skip condition to asyncio generator tests * Add skip condition to asyncio generator tests --------- Co-authored-by: Lalleh Rafeei Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * TEMP * Automatic RPM System Updates (#948) * Checkout old action * Adding RPM action * Add dry run * Incorporating action into workflow * Wire secret into custom action * Enable action * Correct action name * Fix syntax * Fix quoting issues * Drop pre-verification. 
Does not work on python * Fix merge artifact * Bedrock titan extraction nearly complete * Cleaning up titan bedrock implementation * TEMP * Tests for bedrock passing Co-authored-by: Lalleh Rafeei * Cleaned up titan testing Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek * Parametrized bedrock testing * Add support for AI21-J2 models * Change to dynamic no conversation id events * Drop all openai refs * [Mega-Linter] Apply linters fixes * Adding response_id and response_model * Drop python 3.7 tests for Hypercorn (#954) * Apply suggestions from code review * Remove unused import --------- Co-authored-by: Uma Annamalai Co-authored-by: SlavaSkvortsov <29122694+SlavaSkvortsov@users.noreply.github.com> Co-authored-by: TimPansino Co-authored-by: Lalleh Rafeei Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> * Feature bedrock cohere instrumentation (#955) * Add AWS Bedrock testing infrastructure * Squashed commit of the following: commit 2834663794c649124052e510c1c9557a830c060a Author: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> Date: Mon Oct 9 17:42:05 2023 -0700 OpenAI Mock Backend (#929) * Add mock external openai server * Add mocked OpenAI server fixtures * Set up recorded responses. * Clean mock server to depend on http server * Linting * Pin flask version for flask restx tests. (#931) * Ignore new redis methods. 
(#932) Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> * Remove approved paths * Update CI Image (#930) * Update available python versions in CI * Update makefile with overrides * Fix default branch detection for arm builds --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add mocking for embedding endpoint * [Mega-Linter] Apply linters fixes * Add ratelimit headers * [Mega-Linter] Apply linters fixes * Only get package version once (#928) * Only get package version once * Add disconnect method * Add disconnect method --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add datalib dependency for embedding testing. * Add OpenAI Test Infrastructure (#926) * Add openai to tox * Add OpenAI test files. * Add test functions. * [Mega-Linter] Apply linters fixes --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: mergify[bot] * Add mock external openai server * Add mocked OpenAI server fixtures * Set up recorded responses. * Clean mock server to depend on http server * Linting * Remove approved paths * Add mocking for embedding endpoint * [Mega-Linter] Apply linters fixes * Add ratelimit headers * [Mega-Linter] Apply linters fixes * Add datalib dependency for embedding testing. --------- Co-authored-by: Uma Annamalai Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: TimPansino Co-authored-by: Hannah Stepanek Co-authored-by: mergify[bot] commit db63d4598c94048986c0e00ebb2cd8827100b54c Author: Uma Annamalai Date: Mon Oct 2 15:31:38 2023 -0700 Add OpenAI Test Infrastructure (#926) * Add openai to tox * Add OpenAI test files. * Add test functions. 
* [Mega-Linter] Apply linters fixes --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: mergify[bot] * Squashed commit of the following: commit 182c7a8c8a91e2d0f234f7ed7d4a14a2422c8342 Author: Uma Annamalai Date: Fri Oct 13 10:12:55 2023 -0700 Add request/ response IDs. commit f6d13f822c22d2039ec32be86b2c54f9dc3de1c9 Author: Uma Annamalai Date: Thu Oct 12 13:23:39 2023 -0700 Test cleanup. commit d0576631d009e481bd5887a3243aac99b097d823 Author: Uma Annamalai Date: Tue Oct 10 10:23:00 2023 -0700 Remove commented code. commit dd29433e719482babbe5c724e7330b1f6324abd7 Author: Uma Annamalai Date: Tue Oct 10 10:19:01 2023 -0700 Add openai sync instrumentation. commit 2834663794c649124052e510c1c9557a830c060a Author: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> Date: Mon Oct 9 17:42:05 2023 -0700 OpenAI Mock Backend (#929) * Add mock external openai server * Add mocked OpenAI server fixtures * Set up recorded responses. * Clean mock server to depend on http server * Linting * Pin flask version for flask restx tests. (#931) * Ignore new redis methods. (#932) Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> * Remove approved paths * Update CI Image (#930) * Update available python versions in CI * Update makefile with overrides * Fix default branch detection for arm builds --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add mocking for embedding endpoint * [Mega-Linter] Apply linters fixes * Add ratelimit headers * [Mega-Linter] Apply linters fixes * Only get package version once (#928) * Only get package version once * Add disconnect method * Add disconnect method --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add datalib dependency for embedding testing. * Add OpenAI Test Infrastructure (#926) * Add openai to tox * Add OpenAI test files. * Add test functions. 
* [Mega-Linter] Apply linters fixes --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: mergify[bot] * Add mock external openai server * Add mocked OpenAI server fixtures * Set up recorded responses. * Clean mock server to depend on http server * Linting * Remove approved paths * Add mocking for embedding endpoint * [Mega-Linter] Apply linters fixes * Add ratelimit headers * [Mega-Linter] Apply linters fixes * Add datalib dependency for embedding testing. --------- Co-authored-by: Uma Annamalai Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: TimPansino Co-authored-by: Hannah Stepanek Co-authored-by: mergify[bot] commit db63d4598c94048986c0e00ebb2cd8827100b54c Author: Uma Annamalai Date: Mon Oct 2 15:31:38 2023 -0700 Add OpenAI Test Infrastructure (#926) * Add openai to tox * Add OpenAI test files. * Add test functions. 
* [Mega-Linter] Apply linters fixes --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: mergify[bot] * TEMP * Bedrock titan extraction nearly complete * Cleaning up titan bedrock implementation * TEMP * Tests for bedrock passing Co-authored-by: Lalleh Rafeei * Cleaned up titan testing Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek * Parametrized bedrock testing * Add support for AI21-J2 models * Change to dynamic no conversation id events * Add cohere model * Remove openai instrumentation from this branch * Remove OpenAI from newrelic/config.py --------- Co-authored-by: Uma Annamalai Co-authored-by: Tim Pansino Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek * AWS Bedrock Embedding Instrumentation (#957) * AWS Bedrock embedding instrumentation * Correct symbol name * Add support for bedrock claude (#960) Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> * Combine Botocore Tests (#959) * Initial file migration * Enable DT on all span tests * Add pytest skip for older botocore versions * Fixup: app name merge conflict --------- Co-authored-by: Hannah Stepanek * Initial bedrock error tracing commit * Add status code to mock bedrock server * Updating error response recording logic * Work on bedrock errror tracing * Chat completion error tracing * Adding embedding error tracing * Delete comment * Update moto * Fix botocore tests & re-structure * [Mega-Linter] Apply linters fixes --------- Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> Co-authored-by: Uma Annamalai Co-authored-by: SlavaSkvortsov <29122694+SlavaSkvortsov@users.noreply.github.com> Co-authored-by: TimPansino Co-authored-by: Lalleh Rafeei Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> Co-authored-by: Tim 
Pansino * Package Version Performance Regression (#970) * Fix package version performance regression * Update tests/agent_unittests/test_package_version_utils.py * Update tests/agent_unittests/test_package_version_utils.py * Update tests/agent_unittests/test_package_version_utils.py * Skip test in python 2 --------- Co-authored-by: Hannah Stepanek * Synthetics Info Header Support (#896) * Add support for new synthetics info header * Add testing for new synthetics headers * Linting * Fixup tests for synthetics headers * Add tests for snake and camel casing --------- Co-authored-by: Uma Annamalai * Fix CI Image Permissions for Non-Root Users (#969) * Use shared directory for pyenv * Simplify permissions --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add package_capturing.enabled setting (#982) * Add capture_dependencies.enabled setting * Change setting name --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Revert "Synthetics Info Header Support (#896)" (#983) This reverts commit 398012772d3889701e1f0be51b6315a78b7592ee. 
* Remove accidental quote from api keys (#985) * Synthetics Info Header Support (#984) * Add support for new synthetics info header * Add testing for new synthetics headers * Linting * Fixup tests for synthetics headers * Add tests for snake and camel casing --------- Co-authored-by: Uma Annamalai * Docker CGroups v2 Utilization Support (#980) * Docker cgroups v2 utilization * Update docker cross agent tests with cgroups v2 * Updated cgroups detection logic * Remove unnecessary grouping --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Testing for supported frameworks in Python 3.12 (#897) * Replaced pkg_resources with importlib.metadata * Add tested/working tests to tox * importlib.metadata version and entry_points logic (#898) * Replaced pkg_resources with importlib.metadata * Fix entry_points logic for Py312 * Fix logic for entry_points * Check to see if list or string * Add Python 3.12 to container setup * Pin dev CI image SHA * Revert sha to latest * Datastores: Replace __version__ with get_package_version (#899) * Replaced pkg_resources with importlib.metadata * Replace pkg_resources in wrapt/importer.py * Add get_package_version from datastores * [Mega-Linter] Apply linters fixes * Push empty commit * Add assert statements for version --------- Co-authored-by: lrafeei * Lambdas and Boto: Replace __version__ with get_package_version (#902) * Replaced pkg_resources with importlib.metadata * Replace pkg_resources in wrapt/importer.py * Add get_package_version for lambdas/boto * Unpin moto version in tests * Fix graphql imports in tox * Add 3.12 release candidate 2 to python versions * Add remaining working 3.12 tests * [Mega-Linter] Apply linters fixes * Trigger test run * [Mega-Linter] Apply linters fixes * Fix some merge issues * Fix some (more) merge issues * Remove old tests in tox * Remove unsupported Django testing (< v2.0) * Fix some tests for agent_features * Fix cherrypy test env in tox * Pin hypercorn (for now) 
* Add more py312 runs and unpin hypercorn * Adding known working test suites * Add remaining non-working test suites * Fix SKLearn Py 3.12 * Fix typos in odbc * Fix fixture scopes for hypercorn * Add settings patch to fix local testing --------- Co-authored-by: lrafeei Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> Co-authored-by: Tim Pansino * Remove all references to NR staging (#989) Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Fix bug with Structlog CallsiteParameter processor (#990) * Fix bug with CallsiteParameters. Co-authored-by: Tim Pansino Co-authored-by: Hannah Stepanek * Add test for structlog processors. * Add test file for structlog processors. * Fix import ordering. * Move asssertion logic into test file. --------- Co-authored-by: Tim Pansino Co-authored-by: Hannah Stepanek * Update wrapt (#993) * Update wrapt to 1.16.0 * Import duplicate functions directly from wrapt * Update object wrappers for wrapt 1.16.0 * Add warning to wrapt duplicate code * Linting * Use super rather than hard coded Object proxy * Formatting * Add test file for wrapper attributes * Linting * Add descriptions to assertions * Overhaul test suite for clarity * Move functions into fixtures * [Mega-Linter] Apply linters fixes * Bump tests * Fix typo * Larger timeout for protobuf --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: TimPansino * Patch sentinel bug (#997) Co-authored-by: Timothy Pansino Co-authored-by: Hannah Stepanek Co-authored-by: Uma Annamalai * Update flaskrestx testing (#1004) * Update flaskrestx testing * Update tastypie testing * Reformat tox * Fix tox typo --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Remove RPM config workflow. (#1007) * Nonced CSP Support (#998) * Add nonce to CSP in browser agent * Adjust nonce position * Add testing for browser timing nonces * Drop py27 from memcache testing. 
(#1018) * Temporarily pin hypercorn version in tests (#1021) * Temporarily pin hypercorn to <0.16 * Temporarily pin hypercorn to <0.16 * Add comment to tox.ini --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Remove case sensitive check in ASGIBrowserMiddleware check. (#1017) * Remove case sensitive check in should_insert_html. * [Mega-Linter] Apply linters fixes * Remove header decoding. --------- Co-authored-by: umaannamalai * Parallel Wheel Builds (#1024) * Fix import issue in tests * Parallelize wheel building and add muslinux support --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Deprecate get_browser_timing_footer API (#999) * Add nonce to CSP in browser agent * Adjust nonce position * Add testing for browser timing nonces * Deprecated browser timing footer APIs. * Full rip out of browser timing footer * Remove cross agent tests for RUM footer (per repo) * Update cat_map tests * Adjust browser header generation timing accuracy * Fix browser tests * Linting * Apply suggestions from code review --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Deprecate ObjectWrapper API (#996) * Update wrapt to 1.16.0 * Import duplicate functions directly from wrapt * Update object wrappers for wrapt 1.16.0 * Add warning to wrapt duplicate code * Linting * Use super rather than hard coded Object proxy * Formatting * Add test file for wrapper attributes * Unify ObjectWrapper with FunctionWrapper * Remove ObjectWrapper from httplib * Remove ObjectWrapper from tastypie * Replace ObjectWrapper use in console * Remove ObjectWrapper from celery * Remove extra import * Update agent APIs * Deprecate ObjectWrapper * Fix object wrapper imports * More import issues * Fix taskwrapper in celery * Pin last supported flask restx version for 3.7 * Undo tox changes * Change all api.object_wrapper references to use new locations * Fixup: callable_name import * Fixup: 
callable_name import --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: Hannah Stepanek * Add checkout actions to deploy workflow (#1027) * Remove Slack section in CONTRIBUTING.rst. (#1029) * Update newrelic/hooks/external_botocore.py * Update newrelic/hooks/external_botocore.py * Update newrelic/hooks/external_botocore.py * Update newrelic/hooks/external_botocore.py * Update tox.ini * Remove unused imports /github/workspace/newrelic/api/web_transaction.py:36:1: F401 'newrelic.core.attribute.create_attributes' imported but unused /github/workspace/newrelic/api/web_transaction.py:36:1: F401 'newrelic.core.attribute.process_user_attribute' imported but unused /github/workspace/newrelic/api/web_transaction.py:37:1: F401 'newrelic.core.attribute_filter.DST_NONE' imported but unused * Fix lint errors /github/workspace/newrelic/common/utilization.py:20:1: F401 'threading' imported but unused /github/workspace/newrelic/common/utilization.py:183:26: E711 comparison to None should be 'if cond is None:' * Fix lint errors /github/workspace/newrelic/console.py:74:1: E402 module level import not at top of file /github/workspace/newrelic/console.py:75:1: E402 module level import not at top of file /github/workspace/newrelic/console.py:76:1: E402 module level import not at top of file /github/workspace/newrelic/console.py:77:1: E402 module level import not at top of file * Fix lint errors /github/workspace/newrelic/core/internal_metrics.py:15:1: F401 'functools' imported but unused /github/workspace/newrelic/core/internal_metrics.py:16:1: F401 'sys' imported but unused /github/workspace/newrelic/core/internal_metrics.py:17:1: F401 'types' imported but unused * Fix lint errors /github/workspace/newrelic/hooks/external_feedparser.py:16:1: F401 'types' imported but unused * Fix lint errors /github/workspace/newrelic/hooks/framework_webpy.py:15:1: F401 'sys' imported but unused * Fix lint errors 
/github/workspace/newrelic/hooks/template_genshi.py:15:1: F401 'types' imported but unused * Fix lint errors /github/workspace/tests/agent_features/test_configuration.py:49:1: E302 expected 2 blank lines, found 1 * Fix lint errors /github/workspace/tests/agent_features/test_error_events.py:30:1: F401 'testing_support.validators.validate_error_trace_attributes.validate_error_trace_attributes' imported but unused * Fix lint errors /github/workspace/tests/cross_agent/test_docker_container_id.py:16:1: F401 'mock' imported but unused /github/workspace/tests/cross_agent/test_docker_container_id_v2.py:16:1: F401 'mock' imported but unused * Fix lint errors /github/workspace/tests/framework_bottle/test_application.py:18:1: F401 'webtest' imported but unused /github/workspace/tests/framework_bottle/test_application.py:37:1: F811 redefinition of unused 'version' from line 19 /github/workspace/tests/framework_bottle/test_application.py:229:5: F401 'newrelic.agent' imported but unused * Fix lint errors /github/workspace/tests/logger_structlog/conftest.py:15:1: F401 'logging' imported but unused /github/workspace/tests/logger_structlog/conftest.py:19:1: F401 'testing_support.fixtures.collector_available_fixture' imported but unused * Fix lint errors /github/workspace/tests/testing_support/external_fixtures.py:57:5: E125 continuation line with same indent as next logical line /github/workspace/tests/testing_support/external_fixtures.py:113:9: E303 too many blank lines (2) /github/workspace/tests/testing_support/external_fixtures.py:154:19: W292 no newline at end of file * Fix lint errors /github/workspace/tests/testing_support/fixtures.py:813:1: W293 blank line contains whitespace * Fix lint errors /github/workspace/tests/testing_support/validators/validate_synthetics_event.py:21:1: E302 expected 2 blank lines, found 1 /github/workspace/tests/testing_support/validators/validate_synthetics_event.py:71:1: W391 blank line at end of file * Logging Attributes (#1033) * Log Forwarding 
User Attributes (#682) * Add context data setting Co-authored-by: Uma Annamalai Co-authored-by: Hannah Stepanek Co-authored-by: Lalleh Rafeei * Update record_log_event signature with attributes * Logging attributes initial implementation * Fix settings attribute error * Update logging instrumentation with attributes * Update log handler API * Add loguru support for extra attrs * Add more explicit messaging to validator * Expanding testing for record_log_event * Expand record log event testing * Fix settings typo * Remove missing loguru attributes from test * Adjust safe log attr encoding * Correct py2 issues * Fix missing record attrs in logging. Co-authored-by: Uma Annamalai Co-authored-by: Hannah Stepanek Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek * Log Attribute Filtering (#1008) * Expand validator for log events * Add settings for context data filtering * Add attribute filtering for log events * Linting * Apply suggestions from code review * Remove none check on attributes * Squashed commit of the following: commit 3962f54d91bef1980523f40eb4649ef354634396 Author: Uma Annamalai Date: Thu Jan 4 12:50:58 2024 -0800 Remove case sensitive check in ASGIBrowserMiddleware check. (#1017) * Remove case sensitive check in should_insert_html. * [Mega-Linter] Apply linters fixes * Remove header decoding. --------- Co-authored-by: umaannamalai commit c3314aeac97b21615252f18c98384840a47db06f Author: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> Date: Tue Jan 2 17:17:20 2024 -0800 Temporarily pin hypercorn version in tests (#1021) * Temporarily pin hypercorn to <0.16 * Temporarily pin hypercorn to <0.16 * Add comment to tox.ini --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> commit 13571451da8e48a8a2ea96d110e45b8e4ef537e3 Author: Uma Annamalai Date: Tue Jan 2 16:17:08 2024 -0800 Drop py27 from memcache testing. 
(#1018) commit 23f969fcfa9e1bf52e80766be0786152229cd43c Author: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> Date: Wed Dec 20 17:01:50 2023 -0800 Nonced CSP Support (#998) * Add nonce to CSP in browser agent * Adjust nonce position * Add testing for browser timing nonces commit 8bfd2b788b222534639523f294577d724a3be5bb Author: Uma Annamalai Date: Mon Dec 18 13:58:10 2023 -0800 Remove RPM config workflow. (#1007) * Add Dictionary Log Message Support (#1014) * Add tests for logging's json logging * Upgrade record_log_event to handle dict logging * Update logging to capture dict messages * Add attributes for dict log messages * Implementation of JSON message filtering * Correct attributes only log behavior * Testing for logging attributes * Add logging context test for py2 * Logically separate attribute tests * Clean out imports * Fix failing tests * Remove logging instrumentation changes for new PR * Add test for record log event edge cases * Update record_log_event for code review * Fix truncation * Move safe_json_encode back to api.log as it's unused elsewhere * Black formatting * Add missing import * Fixup warning message --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Logging Attribute Instrumentation (#1015) * Add tests for logging's json logging * Upgrade record_log_event to handle dict logging * Update logging to capture dict messages * Add attributes for dict log messages * Implementation of JSON message filtering * Correct attributes only log behavior * Testing for logging attributes * Add logging context test for py2 * Logically separate attribute tests * Clean out imports * Fix failing tests * Linting * Ignore path hash * Fix linter errors * Fix linting issues * Apply suggestions from code review * StructLog Attribute Instrumentation (#1026) * Add tests for logging's json logging * Upgrade record_log_event to handle dict logging * Update logging to capture dict messages * Add attributes for dict 
log messages * Implementation of JSON message filtering * Correct attributes only log behavior * Testing for logging attributes * Add logging context test for py2 * Logically separate attribute tests * Clean out imports * Fix failing tests * Structlog cleanup * Attempting list instrumentation * Structlog attributes support Co-authored-by: Lalleh Rafeei Co-authored-by: Uma Annamalai * Remove other frameworks changes * Bump tests * Change cache to lru cache * Linting * Remove TODO * Remove unnecessary check --------- Co-authored-by: Lalleh Rafeei Co-authored-by: Uma Annamalai * Loguru Attribute Instrumentation (#1025) * Add tests for logging's json logging * Upgrade record_log_event to handle dict logging * Update logging to capture dict messages * Add attributes for dict log messages * Implementation of JSON message filtering * Correct attributes only log behavior * Testing for logging attributes * Add logging context test for py2 * Logically separate attribute tests * Clean out imports * Fix failing tests * Structlog cleanup * Attempting list instrumentation * Structlog attributes support Co-authored-by: Lalleh Rafeei Co-authored-by: Uma Annamalai * Loguru instrumentation refactor * New attribute testing * Move exception settings * Clean up testing * Remove unneeded option * Remove other framework changes * [Mega-Linter] Apply linters fixes * Bump tests --------- Co-authored-by: Lalleh Rafeei Co-authored-by: Uma Annamalai Co-authored-by: TimPansino Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Temporarily pin starlette tests * Update web_transaction.py --------- Co-authored-by: Uma Annamalai Co-authored-by: Hannah Stepanek Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: TimPansino * Obfuscate License Keys in Logs (#1031) * Obfuscate license keys * Run formatter * Fix None errors in obfuscate_license_key * Obfuscate API keys 
from headers * Add lowercase api-key to denied headers * Change audit log header filters to be case insensitive --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Instrument Lantern vectorstore * Fix instrumentation for openai 1.8.0 --------- Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> Co-authored-by: Uma Annamalai Co-authored-by: SlavaSkvortsov <29122694+SlavaSkvortsov@users.noreply.github.com> Co-authored-by: TimPansino Co-authored-by: Lalleh Rafeei Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> Co-authored-by: Tim Pansino Co-authored-by: Tim Pansino Co-authored-by: Uma Annamalai --- .devcontainer/Dockerfile | 10 +- .github/actions/update-rpm-config/action.yml | 109 --- .github/containers/Dockerfile | 6 +- .github/workflows/deploy-python.yml | 158 ++++- .github/workflows/tests.yml | 58 -- CONTRIBUTING.rst | 11 - newrelic/admin/__init__.py | 70 +- newrelic/admin/license_key.py | 27 +- newrelic/admin/validate_config.py | 3 +- newrelic/agent.py | 5 +- newrelic/api/application.py | 7 +- newrelic/api/asgi_application.py | 18 +- newrelic/api/cat_header_mixin.py | 6 +- newrelic/api/log.py | 53 +- newrelic/api/message_trace.py | 1 + newrelic/api/solr_trace.py | 3 +- newrelic/api/transaction.py | 100 ++- newrelic/api/web_transaction.py | 651 +++++++++--------- newrelic/api/wsgi_application.py | 18 +- newrelic/common/agent_http.py | 17 +- newrelic/common/encoding_utils.py | 231 ++++--- newrelic/common/object_wrapper.py | 222 ++---- newrelic/common/package_version_utils.py | 1 + newrelic/common/signature.py | 4 +- newrelic/common/utilization.py | 195 +++--- newrelic/config.py | 61 +- newrelic/console.py | 18 +- newrelic/core/agent.py | 6 +- newrelic/core/application.py | 13 +- newrelic/core/attribute.py | 27 + 
newrelic/core/attribute_filter.py | 81 ++- newrelic/core/config.py | 21 + newrelic/core/environment.py | 77 ++- newrelic/core/internal_metrics.py | 31 +- newrelic/core/stats_engine.py | 57 +- newrelic/core/transaction_node.py | 14 + newrelic/hooks/application_celery.py | 21 +- newrelic/hooks/component_piston.py | 7 +- newrelic/hooks/component_tastypie.py | 19 +- newrelic/hooks/external_botocore.py | 63 +- newrelic/hooks/external_feedparser.py | 29 +- newrelic/hooks/external_httplib.py | 27 +- newrelic/hooks/framework_django.py | 89 +-- newrelic/hooks/framework_pylons.py | 9 +- newrelic/hooks/framework_pyramid.py | 10 +- newrelic/hooks/framework_web2py.py | 3 +- newrelic/hooks/framework_webpy.py | 32 +- newrelic/hooks/logger_logging.py | 17 +- newrelic/hooks/logger_loguru.py | 31 +- newrelic/hooks/logger_structlog.py | 95 ++- .../hooks/messagebroker_confluentkafka.py | 8 +- newrelic/hooks/messagebroker_kafkapython.py | 13 +- newrelic/hooks/messagebroker_pika.py | 2 +- newrelic/hooks/middleware_flask_compress.py | 80 +-- newrelic/hooks/mlmodel_langchain.py | 2 + newrelic/hooks/mlmodel_openai.py | 25 +- newrelic/hooks/template_genshi.py | 30 +- newrelic/hooks/template_mako.py | 4 +- newrelic/packages/wrapt/__init__.py | 11 +- newrelic/packages/wrapt/__wrapt__.py | 14 + newrelic/packages/wrapt/_wrappers.c | 45 +- newrelic/packages/wrapt/decorators.py | 2 +- newrelic/packages/wrapt/importer.py | 129 ++-- newrelic/packages/wrapt/patches.py | 141 ++++ newrelic/packages/wrapt/weakrefs.py | 98 +++ newrelic/packages/wrapt/wrappers.py | 268 +------ setup.cfg | 2 +- setup.py | 1 + tests/adapter_hypercorn/test_hypercorn.py | 12 +- tests/agent_features/test_asgi_browser.py | 56 +- tests/agent_features/test_browser.py | 98 ++- tests/agent_features/test_configuration.py | 3 + tests/agent_features/test_error_events.py | 19 +- tests/agent_features/test_lambda_handler.py | 12 + tests/agent_features/test_log_events.py | 358 ++++++++-- tests/agent_features/test_logs_in_context.py | 98 
+-- tests/agent_features/test_ml_events.py | 2 +- tests/agent_features/test_serverless_mode.py | 2 + tests/agent_features/test_synthetics.py | 98 ++- ...n_event_data_and_some_browser_stuff_too.py | 27 +- .../agent_streaming/test_infinite_tracing.py | 4 +- tests/agent_unittests/test_encoding_utils.py | 52 ++ tests/agent_unittests/test_environment.py | 21 + tests/agent_unittests/test_harvest_loop.py | 4 + .../test_package_version_utils.py | 2 +- tests/agent_unittests/test_wrappers.py | 81 +++ .../fixtures/docker_container_id_v2/README.md | 6 + .../docker_container_id_v2/cases.json | 36 + .../docker-20.10.16.txt | 24 + .../docker_container_id_v2/docker-24.0.2.txt | 21 + .../docker-too-long.txt | 21 + .../fixtures/docker_container_id_v2/empty.txt | 0 .../invalid-characters.txt | 21 + .../docker_container_id_v2/invalid-length.txt | 21 + .../fixtures/rum_client_config.json | 91 --- .../close-body-in-comment.html | 26 - .../dynamic-iframe.html | 35 - tests/cross_agent/test_cat_map.py | 6 +- ..._docker.py => test_docker_container_id.py} | 41 +- .../test_docker_container_id_v2.py | 60 ++ tests/cross_agent/test_lambda_event_source.py | 51 +- tests/cross_agent/test_rum_client_config.py | 145 ---- tests/datastore_asyncpg/test_multiple_dbs.py | 12 +- tests/datastore_asyncpg/test_query.py | 12 +- tests/datastore_mysql/test_database.py | 22 +- tests/datastore_psycopg2cffi/test_database.py | 7 +- tests/external_botocore/test_boto3_iam.py | 3 +- tests/external_botocore/test_boto3_s3.py | 3 +- tests/external_botocore/test_boto3_sns.py | 3 +- .../test_botocore_dynamodb.py | 3 +- tests/external_botocore/test_botocore_ec2.py | 3 +- tests/external_botocore/test_botocore_s3.py | 5 +- tests/external_botocore/test_botocore_sqs.py | 9 +- tests/external_requests/test_requests.py | 14 +- tests/external_urllib3/test_urllib3.py | 18 +- tests/framework_bottle/test_application.py | 258 +++---- tests/framework_cherrypy/test_application.py | 120 ++-- tests/framework_django/templates/main.html | 1 - 
tests/framework_django/test_application.py | 539 +++++++-------- tests/framework_django/views.py | 81 ++- tests/framework_flask/_test_compress.py | 80 ++- tests/framework_flask/test_application.py | 301 ++++---- tests/framework_flask/test_compress.py | 99 +-- .../test_append_slash_app.py | 71 +- tests/framework_pyramid/test_application.py | 200 +++--- tests/logger_logging/conftest.py | 10 +- tests/logger_logging/test_attributes.py | 90 +++ tests/logger_logging/test_local_decorating.py | 15 +- tests/logger_logging/test_log_forwarding.py | 48 +- tests/logger_logging/test_logging_handler.py | 76 +- tests/logger_loguru/conftest.py | 16 +- tests/logger_loguru/test_attributes.py | 70 ++ tests/logger_loguru/test_stack_inspection.py | 56 -- tests/logger_structlog/conftest.py | 92 ++- .../test_attribute_forwarding.py | 49 -- tests/logger_structlog/test_attributes.py | 97 +++ .../logger_structlog/test_local_decorating.py | 4 +- .../test_structlog_processors.py | 25 + .../test_producer.py | 22 +- .../test_producer.py | 20 + tests/mlmodel_sklearn/test_linear_models.py | 5 + tests/testing_support/external_fixtures.py | 100 +-- tests/testing_support/fixtures.py | 48 ++ tests/testing_support/sample_applications.py | 5 +- .../validators/validate_browser_attributes.py | 16 +- .../validators/validate_log_event_count.py | 10 +- ...ate_log_event_count_outside_transaction.py | 10 +- .../validators/validate_log_events.py | 38 +- ...validate_log_events_outside_transaction.py | 26 +- .../validators/validate_synthetics_event.py | 11 +- tox.ini | 229 +++--- 151 files changed, 4733 insertions(+), 3590 deletions(-) delete mode 100644 .github/actions/update-rpm-config/action.yml create mode 100644 newrelic/packages/wrapt/__wrapt__.py create mode 100644 newrelic/packages/wrapt/patches.py create mode 100644 newrelic/packages/wrapt/weakrefs.py create mode 100644 tests/agent_unittests/test_encoding_utils.py create mode 100644 tests/agent_unittests/test_wrappers.py create mode 100644 
tests/cross_agent/fixtures/docker_container_id_v2/README.md create mode 100644 tests/cross_agent/fixtures/docker_container_id_v2/cases.json create mode 100644 tests/cross_agent/fixtures/docker_container_id_v2/docker-20.10.16.txt create mode 100644 tests/cross_agent/fixtures/docker_container_id_v2/docker-24.0.2.txt create mode 100644 tests/cross_agent/fixtures/docker_container_id_v2/docker-too-long.txt create mode 100644 tests/cross_agent/fixtures/docker_container_id_v2/empty.txt create mode 100644 tests/cross_agent/fixtures/docker_container_id_v2/invalid-characters.txt create mode 100644 tests/cross_agent/fixtures/docker_container_id_v2/invalid-length.txt delete mode 100644 tests/cross_agent/fixtures/rum_client_config.json delete mode 100644 tests/cross_agent/fixtures/rum_footer_insertion_location/close-body-in-comment.html delete mode 100644 tests/cross_agent/fixtures/rum_footer_insertion_location/dynamic-iframe.html rename tests/cross_agent/{test_docker.py => test_docker_container_id.py} (50%) create mode 100644 tests/cross_agent/test_docker_container_id_v2.py delete mode 100644 tests/cross_agent/test_rum_client_config.py create mode 100644 tests/logger_logging/test_attributes.py create mode 100644 tests/logger_loguru/test_attributes.py delete mode 100644 tests/logger_loguru/test_stack_inspection.py delete mode 100644 tests/logger_structlog/test_attribute_forwarding.py create mode 100644 tests/logger_structlog/test_attributes.py create mode 100644 tests/logger_structlog/test_structlog_processors.py diff --git a/.devcontainer/Dockerfile b/.devcontainer/Dockerfile index bc4a5324a1..3f892b407a 100644 --- a/.devcontainer/Dockerfile +++ b/.devcontainer/Dockerfile @@ -5,18 +5,16 @@ FROM ghcr.io/newrelic/newrelic-python-agent-ci:${IMAGE_TAG} # Setup non-root user USER root ARG UID=1000 -ARG GID=$UID +ARG GID=${UID} ENV HOME /home/vscode RUN mkdir -p ${HOME} && \ groupadd --gid ${GID} vscode && \ useradd --uid ${UID} --gid ${GID} --home ${HOME} vscode && \ chown -R 
${UID}:${GID} /home/vscode -# Move pyenv installation -ENV PYENV_ROOT="${HOME}/.pyenv" -ENV PATH="$PYENV_ROOT/bin:$PYENV_ROOT/shims:${PATH}" -RUN mv /root/.pyenv /home/vscode/.pyenv && \ - chown -R vscode:vscode /home/vscode/.pyenv +# Fix pyenv installation +RUN echo 'eval "$(pyenv init -)"' >>${HOME}/.bashrc && \ + chown -R vscode:vscode ${PYENV_ROOT} # Set user USER ${UID}:${GID} diff --git a/.github/actions/update-rpm-config/action.yml b/.github/actions/update-rpm-config/action.yml deleted file mode 100644 index 9d19ebba0b..0000000000 --- a/.github/actions/update-rpm-config/action.yml +++ /dev/null @@ -1,109 +0,0 @@ -name: "update-rpm-config" -description: "Set current version of agent in rpm config using API." -inputs: - agent-language: - description: "Language agent to configure (eg. python)" - required: true - default: "python" - target-system: - description: "Target System: prod|staging|all" - required: true - default: "all" - agent-version: - description: "3-4 digit agent version number (eg. 
1.2.3) with optional leading v (ignored)" - required: true - dry-run: - description: "Dry Run" - required: true - default: "false" - production-api-key: - description: "API key for New Relic Production" - required: false - staging-api-key: - description: "API key for New Relic Staging" - required: false - -runs: - using: "composite" - steps: - - name: Trim potential leading v from agent version - shell: bash - run: | - AGENT_VERSION=${{ inputs.agent-version }} - echo "AGENT_VERSION=${AGENT_VERSION#"v"}" >> $GITHUB_ENV - - - name: Generate Payload - shell: bash - run: | - echo "PAYLOAD='{ \"system_configuration\": { \"key\": \"${{ inputs.agent-language }}_agent_version\", \"value\": \"${{ env.AGENT_VERSION }}\" } }'" >> $GITHUB_ENV - - - name: Generate Content-Type - shell: bash - run: | - echo "CONTENT_TYPE='Content-Type: application/json'" >> $GITHUB_ENV - - - name: Update Staging system configuration page - shell: bash - if: ${{ inputs.dry-run == 'false' && (inputs.target-system == 'staging' || inputs.target-system == 'all') }} - run: | - curl -X POST 'https://staging-api.newrelic.com/v2/system_configuration.json' \ - -H "X-Api-Key:${{ inputs.staging-api-key }}" -i \ - -H ${{ env.CONTENT_TYPE }} \ - -d ${{ env.PAYLOAD }} - - - name: Update Production system configuration page - shell: bash - if: ${{ inputs.dry-run == 'false' && (inputs.target-system == 'prod' || inputs.target-system == 'all') }} - run: | - curl -X POST 'https://api.newrelic.com/v2/system_configuration.json' \ - -H "X-Api-Key:${{ inputs.production-api-key }}" -i \ - -H ${{ env.CONTENT_TYPE }} \ - -d ${{ env.PAYLOAD }} - - - name: Verify Staging system configuration update - shell: bash - if: ${{ inputs.dry-run == 'false' && (inputs.target-system == 'staging' || inputs.target-system == 'all') }} - run: | - STAGING_VERSION=$(curl -X GET 'https://staging-api.newrelic.com/v2/system_configuration.json' \ - -H "X-Api-Key:${{ inputs.staging-api-key }}" \ - -H "${{ env.CONTENT_TYPE }}" | jq 
".system_configurations | from_entries | .${{inputs.agent-language}}_agent_version") - - if [ "${{ env.AGENT_VERSION }}" != "$STAGING_VERSION" ]; then - echo "Staging version mismatch: $STAGING_VERSION" - exit 1 - fi - - - name: Verify Production system configuration update - shell: bash - if: ${{ inputs.dry-run == 'false' && (inputs.target-system == 'prod' || inputs.target-system == 'all') }} - run: | - PROD_VERSION=$(curl -X GET 'https://api.newrelic.com/v2/system_configuration.json' \ - -H "X-Api-Key:${{ inputs.production-api-key }}" \ - -H "${{ env.CONTENT_TYPE }}" | jq ".system_configurations | from_entries | .${{inputs.agent-language}}_agent_version") - - if [ "${{ env.AGENT_VERSION }}" != "$PROD_VERSION" ]; then - echo "Production version mismatch: $PROD_VERSION" - exit 1 - fi - - - name: (dry-run) Update Staging system configuration page - shell: bash - if: ${{ inputs.dry-run != 'false' && (inputs.target-system == 'staging' || inputs.target-system == 'all') }} - run: | - cat << EOF - curl -X POST 'https://staging-api.newrelic.com/v2/system_configuration.json' \ - -H "X-Api-Key:**REDACTED**" -i \ - -H ${{ env.CONTENT_TYPE }} \ - -d ${{ env.PAYLOAD }} - EOF - - - name: (dry-run) Update Production system configuration page - shell: bash - if: ${{ inputs.dry-run != 'false' && (inputs.target-system == 'prod' || inputs.target-system == 'all') }} - run: | - cat << EOF - curl -X POST 'https://api.newrelic.com/v2/system_configuration.json' \ - -H "X-Api-Key:**REDACTED**" -i \ - -H ${{ env.CONTENT_TYPE }} \ - -d ${{ env.PAYLOAD }} - EOF diff --git a/.github/containers/Dockerfile b/.github/containers/Dockerfile index 57d8c234c9..d2d8e90241 100644 --- a/.github/containers/Dockerfile +++ b/.github/containers/Dockerfile @@ -89,10 +89,10 @@ ENV HOME /root WORKDIR "${HOME}" # Install pyenv -ENV PYENV_ROOT="${HOME}/.pyenv" +ENV PYENV_ROOT="/usr/local/pyenv" RUN curl https://pyenv.run/ | /bin/bash -ENV PATH="$PYENV_ROOT/bin:$PYENV_ROOT/shims:${PATH}" -RUN echo 'eval "$(pyenv 
init -)"' >>$HOME/.bashrc && \ +ENV PATH="${PYENV_ROOT}/bin:${PYENV_ROOT}/shims:${PATH}" +RUN echo 'eval "$(pyenv init -)"' >>${HOME}/.bashrc && \ pyenv update # Install Python diff --git a/.github/workflows/deploy-python.yml b/.github/workflows/deploy-python.yml index ca908b8250..a579703034 100644 --- a/.github/workflows/deploy-python.yml +++ b/.github/workflows/deploy-python.yml @@ -20,17 +20,125 @@ on: - published jobs: - deploy-linux: + build-linux-py3: runs-on: ubuntu-latest + strategy: + fail-fast: true + matrix: + wheel: + - cp37-manylinux + - cp37-musllinux + - cp38-manylinux + - cp38-musllinux + - cp39-manylinux + - cp39-musllinux + - cp310-manylinux + - cp310-musllinux + - cp311-manylinux + - cp311-musllinux + - cp312-manylinux + - cp312-musllinux steps: - - uses: actions/checkout@v3 + - uses: actions/checkout@v4 with: persist-credentials: false fetch-depth: 0 - name: Setup QEMU - uses: docker/setup-qemu-action@v1 + uses: docker/setup-qemu-action@v3 + + - name: Build Wheels + uses: pypa/cibuildwheel@v2.16.2 + env: + CIBW_PLATFORM: linux + CIBW_BUILD: "${{ matrix.wheel }}*" + CIBW_ARCHS_LINUX: x86_64 aarch64 + CIBW_ENVIRONMENT: "LD_LIBRARY_PATH=/opt/rh/devtoolset-8/root/usr/lib64:/opt/rh/devtoolset-8/root/usr/lib:/opt/rh/devtoolset-8/root/usr/lib64/dyninst:/opt/rh/devtoolset-8/root/usr/lib/dyninst:/usr/local/lib64:/usr/local/lib" + CIBW_TEST_REQUIRES: pytest + CIBW_TEST_COMMAND: "PYTHONPATH={project}/tests pytest {project}/tests/agent_unittests -vx" + + - name: Upload Artifacts + uses: actions/upload-artifact@v4.0.0 + with: + name: ${{ github.job }}-${{ matrix.wheel }} + path: ./wheelhouse/*.whl + retention-days: 1 + + build-linux-py2: + runs-on: ubuntu-latest + + steps: + - uses: actions/checkout@v4 + with: + persist-credentials: false + fetch-depth: 0 + + - name: Setup QEMU + uses: docker/setup-qemu-action@v3 + + - name: Build Wheels + uses: pypa/cibuildwheel@v1.12.0 + env: + CIBW_PLATFORM: linux + CIBW_BUILD: cp27-manylinux_x86_64 + CIBW_ARCHS_LINUX: 
x86_64 + CIBW_ENVIRONMENT: "LD_LIBRARY_PATH=/opt/rh/devtoolset-8/root/usr/lib64:/opt/rh/devtoolset-8/root/usr/lib:/opt/rh/devtoolset-8/root/usr/lib64/dyninst:/opt/rh/devtoolset-8/root/usr/lib/dyninst:/usr/local/lib64:/usr/local/lib" + CIBW_TEST_REQUIRES: pytest==4.6.11 + CIBW_TEST_COMMAND: "PYTHONPATH={project}/tests pytest {project}/tests/agent_unittests -vx" + + - name: Upload Artifacts + uses: actions/upload-artifact@v4.0.0 + with: + name: ${{ github.job }} + path: ./wheelhouse/*.whl + retention-days: 1 + + build-sdist: + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v4 + with: + persist-credentials: false + fetch-depth: 0 + + - name: Install Dependencies + run: | + pip install -U pip + pip install -U setuptools + + - name: Build Source Package + run: | + python setup.py sdist + + - name: Prepare MD5 Hash File + run: | + tarball="$(python setup.py --fullname).tar.gz" + md5_file=${tarball}.md5 + openssl md5 -binary dist/${tarball} | xxd -p | tr -d '\n' > dist/${md5_file} + + - name: Upload Artifacts + uses: actions/upload-artifact@v4.0.0 + with: + name: ${{ github.job }}-sdist + path: | + ./dist/*.tar.gz + ./dist/*.tar.gz.md5 + retention-days: 1 + + deploy: + runs-on: ubuntu-latest + + needs: + - build-linux-py3 + - build-linux-py2 + - build-sdist + + steps: + - uses: actions/checkout@v4 + with: + persist-credentials: false + fetch-depth: 0 - uses: actions/setup-python@v2 with: @@ -42,32 +150,22 @@ jobs: pip install -U pip pip install -U wheel setuptools twine - - name: Build Source Package - run: python setup.py sdist - - - name: Build Manylinux Wheels (Python 2) - uses: pypa/cibuildwheel@v1.12.0 - env: - CIBW_PLATFORM: linux - CIBW_BUILD: cp27-manylinux_x86_64 - CIBW_ARCHS: x86_64 - CIBW_ENVIRONMENT: "LD_LIBRARY_PATH=/opt/rh/devtoolset-8/root/usr/lib64:/opt/rh/devtoolset-8/root/usr/lib:/opt/rh/devtoolset-8/root/usr/lib64/dyninst:/opt/rh/devtoolset-8/root/usr/lib/dyninst:/usr/local/lib64:/usr/local/lib" + - name: Download Artifacts + uses:
actions/download-artifact@v4.1.0 + with: + path: ./artifacts/ - - name: Build Manylinux Wheels (Python 3) - uses: pypa/cibuildwheel@v2.11.1 - env: - CIBW_PLATFORM: linux - CIBW_BUILD: cp37-manylinux* cp38-manylinux* cp39-manylinux* cp310-manylinux* cp311-manylinux* - CIBW_ARCHS: x86_64 aarch64 - CIBW_ENVIRONMENT: "LD_LIBRARY_PATH=/opt/rh/devtoolset-8/root/usr/lib64:/opt/rh/devtoolset-8/root/usr/lib:/opt/rh/devtoolset-8/root/usr/lib64/dyninst:/opt/rh/devtoolset-8/root/usr/lib/dyninst:/usr/local/lib64:/usr/local/lib" + - name: Unpack Artifacts + run: | + mkdir -p dist/ + mv artifacts/**/*{.whl,.tar.gz,.tar.gz.md5} dist/ - name: Upload Package to S3 run: | tarball="$(python setup.py --fullname).tar.gz" - md5_file=$(mktemp) - openssl md5 -binary dist/$tarball | xxd -p | tr -d '\n' > $md5_file - aws s3 cp $md5_file $S3_DST/${tarball}.md5 - aws s3 cp dist/$tarball $S3_DST/$tarball + md5_file=${tarball}.md5 + aws s3 cp dist/${md5_file} $S3_DST/${md5_file} + aws s3 cp dist/${tarball} $S3_DST/${tarball} env: S3_DST: s3://nr-downloads-main/python_agent/release AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} @@ -76,17 +174,7 @@ jobs: - name: Upload Package to PyPI run: | - twine upload --non-interactive dist/*.tar.gz wheelhouse/*-manylinux*.whl + twine upload --non-interactive dist/*.tar.gz dist/*.whl env: TWINE_USERNAME: __token__ TWINE_PASSWORD: ${{ secrets.PYPI_TOKEN }} - - - name: Update RPM Config - uses: ./.github/actions/update-rpm-config - with: - agent-language: "python" - target-system: "all" - agent-version: "${{ github.ref_name }}" - dry-run: "false" - production-api-key: ${{ secrets.NEW_RELIC_API_KEY_PRODUCTION }}" - staging-api-key: ${{ secrets.NEW_RELIC_API_KEY_STAGING }}" diff --git a/.github/workflows/tests.yml b/.github/workflows/tests.yml index 402d0c629c..b44aa8e841 100644 --- a/.github/workflows/tests.yml +++ b/.github/workflows/tests.yml @@ -36,7 +36,6 @@ jobs: - python - elasticsearchserver07 - elasticsearchserver08 - - gearman - grpc - kafka - 
memcached @@ -967,63 +966,6 @@ jobs: path: ./**/.coverage.* retention-days: 1 - gearman: - env: - TOTAL_GROUPS: 1 - - strategy: - fail-fast: false - matrix: - group-number: [1] - - runs-on: ubuntu-20.04 - container: - image: ghcr.io/newrelic/newrelic-python-agent-ci:latest - options: >- - --add-host=host.docker.internal:host-gateway - timeout-minutes: 30 - - services: - gearman: - image: artefactual/gearmand - ports: - - 8080:4730 - # Set health checks to wait until gearman has started - options: >- - --health-cmd "(echo status ; sleep 0.1) | nc 127.0.0.1 4730 -w 1" - --health-interval 10s - --health-timeout 5s - --health-retries 5 - - steps: - - uses: actions/checkout@v3 - - - name: Fetch git tags - run: | - git config --global --add safe.directory "$GITHUB_WORKSPACE" - git fetch --tags origin - - - name: Get Environments - id: get-envs - run: | - echo "envs=$(tox -l | grep '^${{ github.job }}\-' | ./.github/workflows/get-envs.py)" >> $GITHUB_OUTPUT - env: - GROUP_NUMBER: ${{ matrix.group-number }} - - - name: Test - run: | - tox -vv -e ${{ steps.get-envs.outputs.envs }} -p auto - env: - TOX_PARALLEL_NO_SPINNER: 1 - PY_COLORS: 0 - - - name: Upload Coverage Artifacts - uses: actions/upload-artifact@v3 - with: - name: coverage-${{ github.job }}-${{ strategy.job-index }} - path: ./**/.coverage.* - retention-days: 1 - firestore: env: TOTAL_GROUPS: 1 diff --git a/CONTRIBUTING.rst b/CONTRIBUTING.rst index 12081d1ee7..d525b7df4d 100644 --- a/CONTRIBUTING.rst +++ b/CONTRIBUTING.rst @@ -228,14 +228,3 @@ entering the directory of the tests you want to run. Then, run the following command: ``tox -c tox.ini -e [test environment]`` - -******* - Slack -******* - -We host a public Slack with a dedicated channel for contributors and -maintainers of open source projects hosted by New Relic. If you are -contributing to this project, you're welcome to request access to the -#oss-contributors channel in the newrelicusers.slack.com workspace. 
To -request access, please use this `link -`__. diff --git a/newrelic/admin/__init__.py b/newrelic/admin/__init__.py index e41599a318..509037dd50 100644 --- a/newrelic/admin/__init__.py +++ b/newrelic/admin/__init__.py @@ -14,27 +14,26 @@ from __future__ import print_function -import sys import logging +import sys _builtin_plugins = [ - 'debug_console', - 'generate_config', - 'license_key', - 'local_config', - 'network_config', - 'record_deploy', - 'run_program', - 'run_python', - 'server_config', - 'validate_config' + "debug_console", + "generate_config", + "license_key", + "local_config", + "network_config", + "record_deploy", + "run_program", + "run_python", + "server_config", + "validate_config", ] _commands = {} -def command(name, options='', description='', hidden=False, - log_intercept=True, deprecated=False): +def command(name, options="", description="", hidden=False, log_intercept=True, deprecated=False): def wrapper(callback): callback.name = name callback.options = options @@ -44,6 +43,7 @@ def wrapper(callback): callback.deprecated = deprecated _commands[name] = callback return callback + return wrapper @@ -51,15 +51,15 @@ def usage(name): details = _commands[name] if details.deprecated: print("[WARNING] This command is deprecated and will be removed") - print('Usage: newrelic-admin %s %s' % (name, details.options)) + print("Usage: newrelic-admin %s %s" % (name, details.options)) -@command('help', '[command]', hidden=True) +@command("help", "[command]", hidden=True) def help(args): if not args: - print('Usage: newrelic-admin command [options]') + print("Usage: newrelic-admin command [options]") print() - print("Type 'newrelic-admin help <command>'", end='') + print("Type 'newrelic-admin help <command>'", end="") print("for help on a specific command.") print() print("Available commands are:") @@ -68,24 +68,24 @@ def help(args): for name in commands: details = _commands[name] if not details.hidden: - print(' ', name) + print(" ", name) else: name = args[0] if name not in
_commands: - print("Unknown command '%s'." % name, end=' ') + print("Unknown command '%s'." % name, end=" ") print("Type 'newrelic-admin help' for usage.") else: details = _commands[name] - print('Usage: newrelic-admin %s %s' % (name, details.options)) + print("Usage: newrelic-admin %s %s" % (name, details.options)) if details.description: print() description = details.description if details.deprecated: - description = '[DEPRECATED] ' + description + description = "[DEPRECATED] " + description print(description) @@ -99,7 +99,7 @@ def emit(self, record): if len(logging.root.handlers) != 0: return - if record.name.startswith('newrelic.packages'): + if record.name.startswith("newrelic.packages"): return if record.levelno < logging.WARNING: @@ -107,9 +107,9 @@ def emit(self, record): return logging.StreamHandler.emit(self, record) - _stdout_logger = logging.getLogger('newrelic') + _stdout_logger = logging.getLogger("newrelic") _stdout_handler = FilteredStreamHandler(sys.stdout) - _stdout_format = '%(levelname)s - %(message)s\n' + _stdout_format = "%(levelname)s - %(message)s\n" _stdout_formatter = logging.Formatter(_stdout_format) _stdout_handler.setFormatter(_stdout_formatter) _stdout_logger.addHandler(_stdout_handler) @@ -117,19 +117,27 @@ def emit(self, record): def load_internal_plugins(): for name in _builtin_plugins: - module_name = '%s.%s' % (__name__, name) + module_name = "%s.%s" % (__name__, name) __import__(module_name) def load_external_plugins(): try: - import pkg_resources + # Preferred after Python 3.10 + if sys.version_info >= (3, 10): + from importlib.metadata import entry_points + # Introduced in Python 3.8 + elif sys.version_info >= (3, 8) and sys.version_info <= (3, 9): + from importlib_metadata import entry_points + # Removed in Python 3.12 + else: + from pkg_resources import iter_entry_points as entry_points except ImportError: return - group = 'newrelic.admin' + group = "newrelic.admin" - for entrypoint in 
pkg_resources.iter_entry_points(group=group): + for entrypoint in entry_points(group=group): __import__(entrypoint.module_name) @@ -138,12 +146,12 @@ def main(): if len(sys.argv) > 1: command = sys.argv[1] else: - command = 'help' + command = "help" callback = _commands[command] except Exception: - print("Unknown command '%s'." % command, end='') + print("Unknown command '%s'." % command, end="") print("Type 'newrelic-admin help' for usage.") sys.exit(1) @@ -156,5 +164,5 @@ def main(): load_internal_plugins() load_external_plugins() -if __name__ == '__main__': +if __name__ == "__main__": main() diff --git a/newrelic/admin/license_key.py b/newrelic/admin/license_key.py index 35aaed1f41..e1eaaa39b2 100644 --- a/newrelic/admin/license_key.py +++ b/newrelic/admin/license_key.py @@ -15,18 +15,22 @@ from __future__ import print_function from newrelic.admin import command, usage +from newrelic.common.encoding_utils import obfuscate_license_key -@command('license-key', 'config_file [log_file]', -"""Prints out the account license key after having loaded the settings -from .""") +@command( + "license-key", + "config_file [log_file]", + """Prints out an obfuscated account license key after having loaded the settings +from .""", +) def license_key(args): + import logging import os import sys - import logging if len(args) == 0: - usage('license-key') + usage("license-key") sys.exit(1) from newrelic.config import initialize @@ -35,7 +39,7 @@ def license_key(args): if len(args) >= 2: log_file = args[1] else: - log_file = '/tmp/python-agent-test.log' + log_file = "/tmp/python-agent-test.log" log_level = logging.DEBUG @@ -45,14 +49,13 @@ def license_key(args): pass config_file = args[0] - environment = os.environ.get('NEW_RELIC_ENVIRONMENT') + environment = os.environ.get("NEW_RELIC_ENVIRONMENT") - if config_file == '-': - config_file = os.environ.get('NEW_RELIC_CONFIG_FILE') + if config_file == "-": + config_file = os.environ.get("NEW_RELIC_CONFIG_FILE") - initialize(config_file, 
environment, ignore_errors=False, - log_file=log_file, log_level=log_level) + initialize(config_file, environment, ignore_errors=False, log_file=log_file, log_level=log_level) _settings = global_settings() - print('license_key = %r' % _settings.license_key) + print("license_key = %r" % obfuscate_license_key(_settings.license_key)) diff --git a/newrelic/admin/validate_config.py b/newrelic/admin/validate_config.py index ac25b715e1..64645b0c62 100644 --- a/newrelic/admin/validate_config.py +++ b/newrelic/admin/validate_config.py @@ -149,6 +149,7 @@ def validate_config(args): sys.exit(1) from newrelic.api.application import register_application + from newrelic.common.encoding_utils import obfuscate_license_key from newrelic.config import initialize from newrelic.core.config import global_settings @@ -200,7 +201,7 @@ def validate_config(args): _logger.debug("Proxy port is %r.", _settings.proxy_port) _logger.debug("Proxy user is %r.", _settings.proxy_user) - _logger.debug("License key is %r.", _settings.license_key) + _logger.debug("License key is %r.", obfuscate_license_key(_settings.license_key)) _timeout = 30.0 diff --git a/newrelic/agent.py b/newrelic/agent.py index bc6cdbbd3a..fc139405f8 100644 --- a/newrelic/agent.py +++ b/newrelic/agent.py @@ -15,7 +15,7 @@ from newrelic.api.application import application_instance as __application from newrelic.api.application import application_settings as __application_settings from newrelic.api.application import register_application as __register_application -from newrelic.api.log import NewRelicContextFormatter # noqa +from newrelic.api.log import NewRelicContextFormatter as __NewRelicContextFormatter from newrelic.api.time_trace import ( add_custom_span_attribute as __add_custom_span_attribute, ) @@ -178,6 +178,7 @@ def __asgi_application(*args, **kwargs): from newrelic.api.web_transaction import web_transaction as __web_transaction from newrelic.api.web_transaction import wrap_web_transaction as __wrap_web_transaction from 
newrelic.common.object_names import callable_name as __callable_name +from newrelic.common.object_wrapper import CallableObjectProxy as __CallableObjectProxy from newrelic.common.object_wrapper import FunctionWrapper as __FunctionWrapper from newrelic.common.object_wrapper import InFunctionWrapper as __InFunctionWrapper from newrelic.common.object_wrapper import ObjectProxy as __ObjectProxy @@ -280,6 +281,7 @@ def __asgi_application(*args, **kwargs): wrap_background_task = __wrap_api_call(__wrap_background_task, "wrap_background_task") LambdaHandlerWrapper = __wrap_api_call(__LambdaHandlerWrapper, "LambdaHandlerWrapper") lambda_handler = __wrap_api_call(__lambda_handler, "lambda_handler") +NewRelicContextFormatter = __wrap_api_call(__NewRelicContextFormatter, "NewRelicContextFormatter") transaction_name = __wrap_api_call(__transaction_name, "transaction_name") TransactionNameWrapper = __wrap_api_call(__TransactionNameWrapper, "TransactionNameWrapper") wrap_transaction_name = __wrap_api_call(__wrap_transaction_name, "wrap_transaction_name") @@ -320,6 +322,7 @@ def __asgi_application(*args, **kwargs): wrap_message_transaction = __wrap_api_call(__wrap_message_transaction, "wrap_message_transaction") callable_name = __wrap_api_call(__callable_name, "callable_name") ObjectProxy = __wrap_api_call(__ObjectProxy, "ObjectProxy") +CallableObjectProxy = __wrap_api_call(__CallableObjectProxy, "CallableObjectProxy") wrap_object = __wrap_api_call(__wrap_object, "wrap_object") wrap_object_attribute = __wrap_api_call(__wrap_object_attribute, "wrap_object_attribute") resolve_path = __wrap_api_call(__resolve_path, "resolve_path") diff --git a/newrelic/api/application.py b/newrelic/api/application.py index e2e7be139f..ebc8356a76 100644 --- a/newrelic/api/application.py +++ b/newrelic/api/application.py @@ -22,7 +22,6 @@ class Application(object): - _lock = threading.Lock() _instances = {} @@ -162,9 +161,11 @@ def record_transaction(self, data): if self.active: 
self._agent.record_transaction(self._name, data) - def record_log_event(self, message, level=None, timestamp=None, priority=None): + def record_log_event(self, message, level=None, timestamp=None, attributes=None, priority=None): if self.active: - self._agent.record_log_event(self._name, message, level, timestamp, priority=priority) + self._agent.record_log_event( + self._name, message, level, timestamp, attributes=attributes, priority=priority + ) def normalize_name(self, name, rule_type="url"): if self.active: diff --git a/newrelic/api/asgi_application.py b/newrelic/api/asgi_application.py index 2e4e4979b3..475faa7cbb 100644 --- a/newrelic/api/asgi_application.py +++ b/newrelic/api/asgi_application.py @@ -97,7 +97,9 @@ def should_insert_html(self, headers): content_type = None for header_name, header_value in headers: - # assume header names are lower cased in accordance with ASGI spec + # ASGI spec (https://asgi.readthedocs.io/en/latest/specs/www.html#http) states + # header names should be lower cased, but not required + header_name = header_name.lower() if header_name == b"content-type": content_type = header_value elif header_name == b"content-encoding": @@ -155,16 +157,9 @@ async def send_inject_browser_agent(self, message): # if there's a valid body string, attempt to insert the HTML if verify_body_exists(self.body): - header = self.transaction.browser_timing_header() - if not header: - # If there's no header, abort browser monitoring injection - await self.send_buffered() - return - - footer = self.transaction.browser_timing_footer() - browser_agent_data = six.b(header) + six.b(footer) - - body = insert_html_snippet(self.body, lambda: browser_agent_data, self.search_maximum) + body = insert_html_snippet( + self.body, lambda: six.b(self.transaction.browser_timing_header()), self.search_maximum + ) # If we have inserted the browser agent if len(body) != len(self.body): @@ -318,7 +313,6 @@ async def nr_async_asgi(receive, send): send=send, source=wrapped, ) 
as transaction: - # Record details of framework against the transaction for later # reporting as supportability metrics. if framework: diff --git a/newrelic/api/cat_header_mixin.py b/newrelic/api/cat_header_mixin.py index fe5c0a71ff..b8251fdca1 100644 --- a/newrelic/api/cat_header_mixin.py +++ b/newrelic/api/cat_header_mixin.py @@ -22,6 +22,7 @@ class CatHeaderMixin(object): cat_transaction_key = 'X-NewRelic-Transaction' cat_appdata_key = 'X-NewRelic-App-Data' cat_synthetics_key = 'X-NewRelic-Synthetics' + cat_synthetics_info_key = 'X-NewRelic-Synthetics-Info' cat_metadata_key = 'x-newrelic-trace' cat_distributed_trace_key = 'newrelic' settings = None @@ -105,8 +106,9 @@ def generate_request_headers(cls, transaction): (cls.cat_transaction_key, encoded_transaction)) if transaction.synthetics_header: - nr_headers.append( - (cls.cat_synthetics_key, transaction.synthetics_header)) + nr_headers.append((cls.cat_synthetics_key, transaction.synthetics_header)) + if transaction.synthetics_info_header: + nr_headers.append((cls.cat_synthetics_info_key, transaction.synthetics_info_header)) return nr_headers diff --git a/newrelic/api/log.py b/newrelic/api/log.py index 846ef275ab..f74339f46b 100644 --- a/newrelic/api/log.py +++ b/newrelic/api/log.py @@ -21,9 +21,11 @@ from newrelic.api.time_trace import get_linking_metadata from newrelic.api.transaction import current_transaction, record_log_event from newrelic.common import agent_http +from newrelic.common.encoding_utils import json_encode from newrelic.common.object_names import parse_exc_info from newrelic.core.attribute import truncate from newrelic.core.config import global_settings, is_expected_error +from newrelic.packages import six def format_exc_info(exc_info): @@ -42,8 +44,30 @@ def format_exc_info(exc_info): return formatted +def safe_json_encode(obj, ignore_string_types=False, **kwargs): + # Performs the same operation as json_encode but replaces unserializable objects with a string containing their class name. 
+ # If ignore_string_types is True, do not encode string types further. + # Currently used for safely encoding logging attributes. + + if ignore_string_types and isinstance(obj, (six.string_types, six.binary_type)): + return obj + + # Attempt to run through JSON serialization + try: + return json_encode(obj, **kwargs) + except Exception: + pass + + # If JSON serialization fails then return a repr + try: + return repr(obj) + except Exception: + # If repr fails then default to an unprintable object name + return "<unprintable %s object>" % type(obj).__name__ + + class NewRelicContextFormatter(Formatter): - DEFAULT_LOG_RECORD_KEYS = frozenset(vars(LogRecord("", 0, "", 0, "", (), None))) + DEFAULT_LOG_RECORD_KEYS = frozenset(set(vars(LogRecord("", 0, "", 0, "", (), None))) | {"message"}) def __init__(self, *args, **kwargs): super(NewRelicContextFormatter, self).__init__() @@ -76,17 +100,12 @@ def log_record_to_dict(cls, record): return output def format(self, record): - def safe_str(object, *args, **kwargs): - """Convert object to str, catching any errors raised.""" - try: - return str(object, *args, **kwargs) - except: - return "<unprintable %s object>" % type(object).__name__ - - return json.dumps(self.log_record_to_dict(record), default=safe_str, separators=(",", ":")) + return json.dumps(self.log_record_to_dict(record), default=safe_json_encode, separators=(",", ":")) class NewRelicLogForwardingHandler(logging.Handler): + DEFAULT_LOG_RECORD_KEYS = frozenset(set(vars(LogRecord("", 0, "", 0, "", (), None))) | {"message"}) + def emit(self, record): try: # Avoid getting local log decorated message @@ -95,10 +114,20 @@ def emit(self, record): else: message = record.getMessage() - record_log_event(message, record.levelname, int(record.created * 1000)) + attrs = self.filter_record_attributes(record) + record_log_event(message, record.levelname, int(record.created * 1000), attributes=attrs) except Exception: self.handleError(record) + @classmethod + def filter_record_attributes(cls, record): + record_attrs = 
vars(record) + DEFAULT_LOG_RECORD_KEYS = cls.DEFAULT_LOG_RECORD_KEYS + if len(record_attrs) > len(DEFAULT_LOG_RECORD_KEYS): + return {k: v for k, v in six.iteritems(vars(record)) if k not in DEFAULT_LOG_RECORD_KEYS} + else: + return None + class NewRelicLogHandler(logging.Handler): """ @@ -126,8 +155,8 @@ def __init__( "The contributed NewRelicLogHandler has been superseded by automatic instrumentation for " "logging in the standard lib. If for some reason you need to manually configure a handler, " "please use newrelic.api.log.NewRelicLogForwardingHandler to take advantage of all the " - "features included in application log forwarding such as proper batching.", - DeprecationWarning + "features included in application log forwarding such as proper batching.", + DeprecationWarning, ) super(NewRelicLogHandler, self).__init__(level=level) self.license_key = license_key or self.settings.license_key diff --git a/newrelic/api/message_trace.py b/newrelic/api/message_trace.py index f564c41cb4..e0fa5956d0 100644 --- a/newrelic/api/message_trace.py +++ b/newrelic/api/message_trace.py @@ -27,6 +27,7 @@ class MessageTrace(CatHeaderMixin, TimeTrace): cat_transaction_key = "NewRelicTransaction" cat_appdata_key = "NewRelicAppData" cat_synthetics_key = "NewRelicSynthetics" + cat_synthetics_info_key = "NewRelicSyntheticsInfo" def __init__(self, library, operation, destination_type, destination_name, params=None, terminal=True, **kwargs): parent = kwargs.pop("parent", None) diff --git a/newrelic/api/solr_trace.py b/newrelic/api/solr_trace.py index e482158ee9..6907f20f8b 100644 --- a/newrelic/api/solr_trace.py +++ b/newrelic/api/solr_trace.py @@ -14,6 +14,7 @@ import newrelic.api.object_wrapper import newrelic.api.time_trace +import newrelic.common.object_wrapper import newrelic.core.solr_node @@ -111,4 +112,4 @@ def decorator(wrapped): def wrap_solr_trace(module, object_path, library, command): - newrelic.api.object_wrapper.wrap_object(module, object_path, SolrTraceWrapper, 
(library, command)) + newrelic.common.object_wrapper.wrap_object(module, object_path, SolrTraceWrapper, (library, command)) diff --git a/newrelic/api/transaction.py b/newrelic/api/transaction.py index 643a5db597..ff40a4ad81 100644 --- a/newrelic/api/transaction.py +++ b/newrelic/api/transaction.py @@ -44,6 +44,7 @@ json_decode, json_encode, obfuscate, + snake_case, ) from newrelic.core.attribute import ( MAX_ATTRIBUTE_LENGTH, @@ -53,6 +54,7 @@ create_attributes, create_user_attributes, process_user_attribute, + resolve_logging_context_attributes, truncate, ) from newrelic.core.attribute_filter import ( @@ -305,11 +307,18 @@ def __init__(self, application, enabled=None, source=None): self._alternate_path_hashes = {} self.is_part_of_cat = False + # Synthetics Header self.synthetics_resource_id = None self.synthetics_job_id = None self.synthetics_monitor_id = None self.synthetics_header = None + # Synthetics Info Header + self.synthetics_type = None + self.synthetics_initiator = None + self.synthetics_attributes = None + self.synthetics_info_header = None + self._custom_metrics = CustomMetrics() self._dimensional_metrics = DimensionalMetrics() @@ -609,6 +618,10 @@ def __exit__(self, exc, value, tb): synthetics_job_id=self.synthetics_job_id, synthetics_monitor_id=self.synthetics_monitor_id, synthetics_header=self.synthetics_header, + synthetics_type=self.synthetics_type, + synthetics_initiator=self.synthetics_initiator, + synthetics_attributes=self.synthetics_attributes, + synthetics_info_header=self.synthetics_info_header, is_part_of_cat=self.is_part_of_cat, trip_id=self.trip_id, path_hash=self.path_hash, @@ -846,6 +859,16 @@ def trace_intrinsics(self): i_attrs["synthetics_job_id"] = self.synthetics_job_id if self.synthetics_monitor_id: i_attrs["synthetics_monitor_id"] = self.synthetics_monitor_id + if self.synthetics_type: + i_attrs["synthetics_type"] = self.synthetics_type + if self.synthetics_initiator: + i_attrs["synthetics_initiator"] = self.synthetics_initiator 
+ if self.synthetics_attributes: + # Add all synthetics attributes + for k, v in self.synthetics_attributes.items(): + if k: + i_attrs["synthetics_%s" % snake_case(k)] = v + if self.total_time: i_attrs["totalTime"] = self.total_time if self._loop_time: @@ -1508,7 +1531,7 @@ def set_transaction_name(self, name, group=None, priority=None): self._group = group self._name = name - def record_log_event(self, message, level=None, timestamp=None, priority=None): + def record_log_event(self, message, level=None, timestamp=None, attributes=None, priority=None): settings = self.settings if not ( settings @@ -1521,18 +1544,62 @@ def record_log_event(self, message, level=None, timestamp=None, priority=None): timestamp = timestamp if timestamp is not None else time.time() level = str(level) if level is not None else "UNKNOWN" + context_attributes = attributes # Name reassigned for clarity - if not message or message.isspace(): - _logger.debug("record_log_event called where message was missing. No log event will be sent.") - return + # Unpack message and attributes from dict inputs + if isinstance(message, dict): + message_attributes = {k: v for k, v in message.items() if k != "message"} + message = message.get("message", "") + else: + message_attributes = None - message = truncate(message, MAX_LOG_MESSAGE_LENGTH) + if message is not None: + # Coerce message into a string type + if not isinstance(message, six.string_types): + try: + message = str(message) + except Exception: + # Exit early for invalid message type after unpacking + _logger.debug( + "record_log_event called where message could not be converted to a string type. No log event will be sent." 
+ ) + return + + # Truncate the now unpacked and string converted message + message = truncate(message, MAX_LOG_MESSAGE_LENGTH) + + # Collect attributes from linking metadata, context data, and message attributes + collected_attributes = {} + if settings and settings.application_logging.forwarding.context_data.enabled: + if context_attributes: + context_attributes = resolve_logging_context_attributes( + context_attributes, settings.attribute_filter, "context." + ) + if context_attributes: + collected_attributes.update(context_attributes) + + if message_attributes: + message_attributes = resolve_logging_context_attributes( + message_attributes, settings.attribute_filter, "message." + ) + if message_attributes: + collected_attributes.update(message_attributes) + + # Exit early if no message or attributes found after filtering + if (not message or message.isspace()) and not context_attributes and not message_attributes: + _logger.debug( + "record_log_event called where no message and no attributes were found. No log event will be sent." 
+ ) + return + + # Finally, add in linking attributes after checking that there is a valid message or at least 1 attribute + collected_attributes.update(get_linking_metadata()) event = LogEventNode( timestamp=timestamp, level=level, message=message, - attributes=get_linking_metadata(), + attributes=collected_attributes, ) self._log_events.add(event, priority=priority) @@ -1892,17 +1959,18 @@ def add_framework_info(name, version=None): transaction.add_framework_info(name, version) -def get_browser_timing_header(): +def get_browser_timing_header(nonce=None): transaction = current_transaction() if transaction and hasattr(transaction, "browser_timing_header"): - return transaction.browser_timing_header() + return transaction.browser_timing_header(nonce) return "" -def get_browser_timing_footer(): - transaction = current_transaction() - if transaction and hasattr(transaction, "browser_timing_footer"): - return transaction.browser_timing_footer() +def get_browser_timing_footer(nonce=None): + warnings.warn( + "The get_browser_timing_footer function is deprecated. Please migrate to only using the get_browser_timing_header API instead.", + DeprecationWarning, + ) return "" @@ -2049,7 +2117,7 @@ def record_ml_event(event_type, params, application=None): application.record_ml_event(event_type, params) -def record_log_event(message, level=None, timestamp=None, application=None, priority=None): +def record_log_event(message, level=None, timestamp=None, attributes=None, application=None, priority=None): """Record a log event. 
Args: @@ -2060,12 +2128,12 @@ def record_log_event(message, level=None, timestamp=None, application=None, prio if application is None: transaction = current_transaction() if transaction: - transaction.record_log_event(message, level, timestamp) + transaction.record_log_event(message, level, timestamp, attributes=attributes) else: application = application_instance(activate=False) if application and application.enabled: - application.record_log_event(message, level, timestamp, priority=priority) + application.record_log_event(message, level, timestamp, attributes=attributes, priority=priority) else: _logger.debug( "record_log_event has been called but no transaction or application was running. As a result, " @@ -2076,7 +2144,7 @@ def record_log_event(message, level=None, timestamp=None, application=None, prio timestamp, ) elif application.enabled: - application.record_log_event(message, level, timestamp, priority=priority) + application.record_log_event(message, level, timestamp, attributes=attributes, priority=priority) def accept_distributed_trace_payload(payload, transport_type="HTTP"): diff --git a/newrelic/api/web_transaction.py b/newrelic/api/web_transaction.py index 9749e26194..5bedce8eaa 100644 --- a/newrelic/api/web_transaction.py +++ b/newrelic/api/web_transaction.py @@ -13,8 +13,8 @@ # limitations under the License. 
import functools -import time import logging +import time import warnings try: @@ -24,24 +24,21 @@ from newrelic.api.application import Application, application_instance from newrelic.api.transaction import Transaction, current_transaction - -from newrelic.common.async_proxy import async_proxy, TransactionContext -from newrelic.common.encoding_utils import (obfuscate, json_encode, - decode_newrelic_header, ensure_str) - -from newrelic.core.attribute import create_attributes, process_user_attribute -from newrelic.core.attribute_filter import DST_BROWSER_MONITORING, DST_NONE - -from newrelic.packages import six - +from newrelic.common.async_proxy import TransactionContext, async_proxy +from newrelic.common.encoding_utils import ( + decode_newrelic_header, + ensure_str, + json_encode, + obfuscate, +) from newrelic.common.object_names import callable_name from newrelic.common.object_wrapper import FunctionWrapper, wrap_object +from newrelic.core.attribute_filter import DST_BROWSER_MONITORING +from newrelic.packages import six _logger = logging.getLogger(__name__) -_js_agent_header_fragment = '' -_js_agent_footer_fragment = '' +_js_agent_header_fragment = '' # Seconds since epoch for Jan 1 2000 JAN_1_2000 = time.mktime((2000, 1, 1, 0, 0, 0, 0, 0, 0)) @@ -81,8 +78,8 @@ def _parse_time_stamp(time_stamp): return converted_time -TRUE_VALUES = {'on', 'true', '1'} -FALSE_VALUES = {'off', 'false', '0'} +TRUE_VALUES = {"on", "true", "1"} +FALSE_VALUES = {"off", "false", "0"} def _lookup_environ_setting(environ, name, default=False): @@ -114,43 +111,78 @@ def _parse_synthetics_header(header): version = int(header[0]) if version == 1: - synthetics['version'] = version - synthetics['account_id'] = int(header[1]) - synthetics['resource_id'] = header[2] - synthetics['job_id'] = header[3] - synthetics['monitor_id'] = header[4] + synthetics["version"] = version + synthetics["account_id"] = int(header[1]) + synthetics["resource_id"] = header[2] + synthetics["job_id"] = header[3] + 
synthetics["monitor_id"] = header[4] except Exception: return return synthetics +def _parse_synthetics_info_header(header): + # Return a dictionary of values from SyntheticsInfo header + # Returns empty dict, if version is not supported. + + synthetics_info = {} + version = None + + try: + version = int(header.get("version")) + + if version == 1: + synthetics_info["version"] = version + synthetics_info["type"] = header.get("type") + synthetics_info["initiator"] = header.get("initiator") + synthetics_info["attributes"] = header.get("attributes") + except Exception: + return + + return synthetics_info + + def _remove_query_string(url): url = ensure_str(url) out = urlparse.urlsplit(url) - return urlparse.urlunsplit((out.scheme, out.netloc, out.path, '', '')) + return urlparse.urlunsplit((out.scheme, out.netloc, out.path, "", "")) def _is_websocket(environ): - return environ.get('HTTP_UPGRADE', '').lower() == 'websocket' + return environ.get("HTTP_UPGRADE", "").lower() == "websocket" -class WebTransaction(Transaction): - unicode_error_reported = False - QUEUE_TIME_HEADERS = ('x-request-start', 'x-queue-start') +def _encode_nonce(nonce): + if not nonce: + return "" + else: + return ' nonce="%s"' % ensure_str(nonce) # Extra space intentional - def __init__(self, application, name, group=None, - scheme=None, host=None, port=None, request_method=None, - request_path=None, query_string=None, headers=None, - enabled=None, source=None): +class WebTransaction(Transaction): + unicode_error_reported = False + QUEUE_TIME_HEADERS = ("x-request-start", "x-queue-start") + + def __init__( + self, + application, + name, + group=None, + scheme=None, + host=None, + port=None, + request_method=None, + request_path=None, + query_string=None, + headers=None, + enabled=None, + source=None, + ): super(WebTransaction, self).__init__(application, enabled, source=source) - # Flags for tracking whether RUM header and footer have been - # generated. 
- + # Flag for tracking whether RUM header has been generated. self.rum_header_generated = False - self.rum_footer_generated = False if not self.enabled: return @@ -188,9 +220,7 @@ def __init__(self, application, name, group=None, if query_string and not self._settings.high_security: query_string = ensure_str(query_string) try: - params = urlparse.parse_qs( - query_string, - keep_blank_values=True) + params = urlparse.parse_qs(query_string, keep_blank_values=True) self._request_params.update(params) except Exception: pass @@ -202,7 +232,7 @@ def __init__(self, application, name, group=None, if name is not None: self.set_transaction_name(name, group, priority=1) elif request_path is not None: - self.set_transaction_name(request_path, 'Uri', priority=1) + self.set_transaction_name(request_path, "Uri", priority=1) def _process_queue_time(self): for queue_time_header in self.QUEUE_TIME_HEADERS: @@ -212,7 +242,7 @@ def _process_queue_time(self): value = ensure_str(value) try: - if value.startswith('t='): + if value.startswith("t="): self.queue_start = _parse_time_stamp(float(value[2:])) else: self.queue_start = _parse_time_stamp(float(value)) @@ -227,31 +257,37 @@ def _process_synthetics_header(self): settings = self._settings - if settings.synthetics.enabled and \ - settings.trusted_account_ids and \ - settings.encoding_key: - - encoded_header = self._request_headers.get('x-newrelic-synthetics') + if settings.synthetics.enabled and settings.trusted_account_ids and settings.encoding_key: + # Synthetics Header + encoded_header = self._request_headers.get("x-newrelic-synthetics") encoded_header = encoded_header and ensure_str(encoded_header) if not encoded_header: return - decoded_header = decode_newrelic_header( - encoded_header, - settings.encoding_key) + decoded_header = decode_newrelic_header(encoded_header, settings.encoding_key) synthetics = _parse_synthetics_header(decoded_header) - if synthetics and \ - synthetics['account_id'] in \ - settings.trusted_account_ids: 
+ # Synthetics Info Header + encoded_info_header = self._request_headers.get("x-newrelic-synthetics-info") + encoded_info_header = encoded_info_header and ensure_str(encoded_info_header) - # Save obfuscated header, because we will pass it along + decoded_info_header = decode_newrelic_header(encoded_info_header, settings.encoding_key) + synthetics_info = _parse_synthetics_info_header(decoded_info_header) + + if synthetics and synthetics["account_id"] in settings.trusted_account_ids: + # Save obfuscated headers, because we will pass them along # unchanged in all external requests. self.synthetics_header = encoded_header - self.synthetics_resource_id = synthetics['resource_id'] - self.synthetics_job_id = synthetics['job_id'] - self.synthetics_monitor_id = synthetics['monitor_id'] + self.synthetics_resource_id = synthetics["resource_id"] + self.synthetics_job_id = synthetics["job_id"] + self.synthetics_monitor_id = synthetics["monitor_id"] + + if synthetics_info: + self.synthetics_info_header = encoded_info_header + self.synthetics_type = synthetics_info["type"] + self.synthetics_initiator = synthetics_info["initiator"] + self.synthetics_attributes = synthetics_info["attributes"] def _process_context_headers(self): # Process the New Relic cross process ID header and extract @@ -259,11 +295,9 @@ def _process_context_headers(self): if self._settings.distributed_tracing.enabled: self.accept_distributed_trace_headers(self._request_headers) else: - client_cross_process_id = \ - self._request_headers.get('x-newrelic-id') - txn_header = self._request_headers.get('x-newrelic-transaction') - self._process_incoming_cat_headers(client_cross_process_id, - txn_header) + client_cross_process_id = self._request_headers.get("x-newrelic-id") + txn_header = self._request_headers.get("x-newrelic-transaction") + self._process_incoming_cat_headers(client_cross_process_id, txn_header) def process_response(self, status_code, response_headers): """Processes response status and headers, 
extracting any @@ -302,54 +336,45 @@ def process_response(self, status_code, response_headers): # Generate CAT response headers try: - read_length = int(self._request_headers.get('content-length')) + read_length = int(self._request_headers.get("content-length")) except Exception: read_length = -1 return self._generate_response_headers(read_length) def _update_agent_attributes(self): - if 'accept' in self._request_headers: - self._add_agent_attribute('request.headers.accept', - self._request_headers['accept']) + if "accept" in self._request_headers: + self._add_agent_attribute("request.headers.accept", self._request_headers["accept"]) try: - content_length = int(self._request_headers['content-length']) - self._add_agent_attribute('request.headers.contentLength', - content_length) + content_length = int(self._request_headers["content-length"]) + self._add_agent_attribute("request.headers.contentLength", content_length) except: pass - if 'content-type' in self._request_headers: - self._add_agent_attribute('request.headers.contentType', - self._request_headers['content-type']) - if 'host' in self._request_headers: - self._add_agent_attribute('request.headers.host', - self._request_headers['host']) - if 'referer' in self._request_headers: - self._add_agent_attribute('request.headers.referer', - _remove_query_string(self._request_headers['referer'])) - if 'user-agent' in self._request_headers: - self._add_agent_attribute('request.headers.userAgent', - self._request_headers['user-agent']) + if "content-type" in self._request_headers: + self._add_agent_attribute("request.headers.contentType", self._request_headers["content-type"]) + if "host" in self._request_headers: + self._add_agent_attribute("request.headers.host", self._request_headers["host"]) + if "referer" in self._request_headers: + self._add_agent_attribute("request.headers.referer", _remove_query_string(self._request_headers["referer"])) + if "user-agent" in self._request_headers: + 
self._add_agent_attribute("request.headers.userAgent", self._request_headers["user-agent"]) if self._request_method: - self._add_agent_attribute('request.method', self._request_method) + self._add_agent_attribute("request.method", self._request_method) if self._request_uri: - self._add_agent_attribute('request.uri', self._request_uri) + self._add_agent_attribute("request.uri", self._request_uri) try: - content_length = int(self._response_headers['content-length']) - self._add_agent_attribute('response.headers.contentLength', - content_length) + content_length = int(self._response_headers["content-length"]) + self._add_agent_attribute("response.headers.contentLength", content_length) except: pass - if 'content-type' in self._response_headers: - self._add_agent_attribute('response.headers.contentType', - self._response_headers['content-type']) + if "content-type" in self._response_headers: + self._add_agent_attribute("response.headers.contentType", self._response_headers["content-type"]) if self._response_code is not None: - self._add_agent_attribute('response.status', - str(self._response_code)) + self._add_agent_attribute("response.status", str(self._response_code)) return super(WebTransaction, self)._update_agent_attributes() - def browser_timing_header(self): + def browser_timing_header(self, nonce=None): """Returns the JavaScript header to be included in any HTML response to perform real user monitoring. This function returns the header as a native Python string. 
In Python 2 native strings @@ -359,39 +384,39 @@ def browser_timing_header(self): """ if not self.enabled: - return '' + return "" if self._state != self.STATE_RUNNING: - return '' + return "" if self.background_task: - return '' + return "" if self.ignore_transaction: - return '' + return "" if not self._settings: - return '' + return "" if not self._settings.browser_monitoring.enabled: - return '' + return "" if not self._settings.license_key: - return '' + return "" # Don't return the header a second time if it has already # been generated. if self.rum_header_generated: - return '' + return "" # Requirement is that the first 13 characters of the account # license key is used as the key when obfuscating values for - # the RUM footer. Will not be able to perform the obfuscation + # the RUM configuration. Will not be able to perform the obfuscation # if license key isn't that long for some reason. if len(self._settings.license_key) < 13: - return '' + return "" # Return the RUM header only if the agent received a valid value # for js_agent_loader from the data collector. The data @@ -400,7 +425,48 @@ def browser_timing_header(self): # 'none'. if self._settings.js_agent_loader: - header = _js_agent_header_fragment % self._settings.js_agent_loader + # Make sure we freeze the path. + + self._freeze_path() + + # When obfuscating values for the browser agent configuration, we only use the + # first 13 characters of the account license key. 
+ + obfuscation_key = self._settings.license_key[:13] + + attributes = {} + + user_attributes = {} + for attr in self.user_attributes: + if attr.destinations & DST_BROWSER_MONITORING: + user_attributes[attr.name] = attr.value + + if user_attributes: + attributes["u"] = user_attributes + + request_parameters = self.request_parameters + request_parameter_attributes = self.filter_request_parameters(request_parameters) + agent_attributes = {} + for attr in request_parameter_attributes: + if attr.destinations & DST_BROWSER_MONITORING: + agent_attributes[attr.name] = attr.value + + if agent_attributes: + attributes["a"] = agent_attributes + + # create the data structure that pulls all our data in + + browser_agent_configuration = self.browser_monitoring_intrinsics(obfuscation_key) + + if attributes: + attributes = obfuscate(json_encode(attributes), obfuscation_key) + browser_agent_configuration["atts"] = attributes + + header = _js_agent_header_fragment % ( + _encode_nonce(nonce), + json_encode(browser_agent_configuration), + self._settings.js_agent_loader, + ) # To avoid any issues with browser encodings, we will make sure # that the javascript we inject for the browser agent is ASCII @@ -414,128 +480,35 @@ def browser_timing_header(self): try: if six.PY2: - header = header.encode('ascii') + header = header.encode("ascii") else: - header.encode('ascii') + header.encode("ascii") except UnicodeError: if not WebTransaction.unicode_error_reported: - _logger.error('ASCII encoding of js-agent-header failed.', - header) + _logger.error("ASCII encoding of js-agent-header failed.", header) WebTransaction.unicode_error_reported = True - header = '' + header = "" else: - header = '' # We remember if we have returned a non empty string value and - # if called a second time we will not return it again. The flag - # will also be used to check whether the footer should be - # generated. + # if called a second time we will not return it again.
if header: self.rum_header_generated = True return header - def browser_timing_footer(self): - """Returns the JavaScript footer to be included in any HTML - response to perform real user monitoring. This function returns - the footer as a native Python string. In Python 2 native strings - are stored as bytes. In Python 3 native strings are stored as - unicode. - - """ - - if not self.enabled: - return '' - - if self._state != self.STATE_RUNNING: - return '' - - if self.ignore_transaction: - return '' - - # Only generate a footer if the header had already been - # generated and we haven't already generated the footer. - - if not self.rum_header_generated: - return '' - - if self.rum_footer_generated: - return '' - - # Make sure we freeze the path. - - self._freeze_path() - - # When obfuscating values for the footer, we only use the - # first 13 characters of the account license key. - - obfuscation_key = self._settings.license_key[:13] - - attributes = {} - - user_attributes = {} - for attr in self.user_attributes: - if attr.destinations & DST_BROWSER_MONITORING: - user_attributes[attr.name] = attr.value - - if user_attributes: - attributes['u'] = user_attributes - - request_parameters = self.request_parameters - request_parameter_attributes = self.filter_request_parameters( - request_parameters) - agent_attributes = {} - for attr in request_parameter_attributes: - if attr.destinations & DST_BROWSER_MONITORING: - agent_attributes[attr.name] = attr.value - - if agent_attributes: - attributes['a'] = agent_attributes - - # create the data structure that pull all our data in - - footer_data = self.browser_monitoring_intrinsics(obfuscation_key) - - if attributes: - attributes = obfuscate(json_encode(attributes), obfuscation_key) - footer_data['atts'] = attributes - - footer = _js_agent_footer_fragment % json_encode(footer_data) - - # To avoid any issues with browser encodings, we will make sure that - # the javascript we inject for the browser agent is ASCII encodable. 
- # Since we obfuscate all agent and user attributes, and the transaction - # name with base 64 encoding, this will preserve those strings, if - # they have values outside of the ASCII character set. - # In the case of Python 2, we actually then use the encoded value - # as we need a native string, which for Python 2 is a byte string. - # If encoding as ASCII fails we will return an empty string. - - try: - if six.PY2: - footer = footer.encode('ascii') - else: - footer.encode('ascii') - - except UnicodeError: - if not WebTransaction.unicode_error_reported: - _logger.error('ASCII encoding of js-agent-footer failed.', - footer) - WebTransaction.unicode_error_reported = True - - footer = '' - - # We remember if we have returned a non empty string value and - # if called a second time we will not return it again. - - if footer: - self.rum_footer_generated = True - - return footer + def browser_timing_footer(self, nonce=None): + """Deprecated API that has been replaced entirely by browser_timing_header().""" + warnings.warn( + "The browser_timing_footer function is deprecated. 
Please migrate to only using the browser_timing_header api instead.", + DeprecationWarning, + ) + return "" def browser_monitoring_intrinsics(self, obfuscation_key): txn_name = obfuscate(self.path, obfuscation_key) @@ -560,7 +533,7 @@ def browser_monitoring_intrinsics(self, obfuscation_key): if self._settings.browser_monitoring.ssl_for_http is not None: ssl_for_http = self._settings.browser_monitoring.ssl_for_http - intrinsics['sslForHttp'] = ssl_for_http + intrinsics["sslForHttp"] = ssl_for_http return intrinsics @@ -573,16 +546,16 @@ def __init__(self, environ): @staticmethod def _to_wsgi(key): key = key.upper() - if key == 'CONTENT-LENGTH': - return 'CONTENT_LENGTH' - elif key == 'CONTENT-TYPE': - return 'CONTENT_TYPE' - return 'HTTP_' + key.replace('-', '_') + if key == "CONTENT-LENGTH": + return "CONTENT_LENGTH" + elif key == "CONTENT-TYPE": + return "CONTENT_TYPE" + return "HTTP_" + key.replace("-", "_") @staticmethod def _from_wsgi(key): key = key.lower() - return key[5:].replace('_', '-') + return key[5:].replace("_", "-") def __getitem__(self, key): wsgi_key = self._to_wsgi(key) @@ -590,14 +563,14 @@ def __getitem__(self, key): def __iter__(self): for key in self.environ: - if key == 'CONTENT_LENGTH': - yield 'content-length', self.environ['CONTENT_LENGTH'] - elif key == 'CONTENT_TYPE': - yield 'content-type', self.environ['CONTENT_TYPE'] - elif key == 'HTTP_CONTENT_LENGTH' or key == 'HTTP_CONTENT_TYPE': + if key == "CONTENT_LENGTH": + yield "content-length", self.environ["CONTENT_LENGTH"] + elif key == "CONTENT_TYPE": + yield "content-type", self.environ["CONTENT_TYPE"] + elif key == "HTTP_CONTENT_LENGTH" or key == "HTTP_CONTENT_TYPE": # These keys are illegal and should be ignored continue - elif key.startswith('HTTP_'): + elif key.startswith("HTTP_"): yield self._from_wsgi(key), self.environ[key] def __len__(self): @@ -607,11 +580,9 @@ def __len__(self): class WSGIWebTransaction(WebTransaction): - - MOD_WSGI_HEADERS = ('mod_wsgi.request_start', 
'mod_wsgi.queue_start') + MOD_WSGI_HEADERS = ("mod_wsgi.request_start", "mod_wsgi.queue_start") def __init__(self, application, environ, source=None): - # The web transaction can be enabled/disabled by # the value of the variable "newrelic.enabled" # in the WSGI environ dictionary. We need to check @@ -621,17 +592,20 @@ def __init__(self, application, environ, source=None): # base class making the decision based on whether # application or agent as a whole are enabled. - enabled = _lookup_environ_setting(environ, - 'newrelic.enabled', None) + enabled = _lookup_environ_setting(environ, "newrelic.enabled", None) # Initialise the common transaction base class. super(WSGIWebTransaction, self).__init__( - application, name=None, port=environ.get('SERVER_PORT'), - request_method=environ.get('REQUEST_METHOD'), - query_string=environ.get('QUERY_STRING'), + application, + name=None, + port=environ.get("SERVER_PORT"), + request_method=environ.get("REQUEST_METHOD"), + query_string=environ.get("QUERY_STRING"), headers=iter(WSGIHeaderProxy(environ)), - enabled=enabled, source=source) + enabled=enabled, + source=source, + ) # Disable transactions for websocket connections. # Also disable autorum if this is a websocket. This is a good idea for @@ -656,21 +630,17 @@ def __init__(self, application, environ, source=None): # Check for override settings from WSGI environ. 
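The `WSGIHeaderProxy` shown earlier translates between HTTP header names and WSGI environ keys. A standalone sketch of that translation (reimplemented here for illustration, not the agent's code — the function names are assumptions):

```python
def to_wsgi(key):
    # HTTP header name -> WSGI environ key. CONTENT-LENGTH and
    # CONTENT-TYPE are special-cased per the WSGI spec; everything
    # else gets the HTTP_ prefix with dashes folded to underscores.
    key = key.upper()
    if key == "CONTENT-LENGTH":
        return "CONTENT_LENGTH"
    elif key == "CONTENT-TYPE":
        return "CONTENT_TYPE"
    return "HTTP_" + key.replace("-", "_")


def from_wsgi(key):
    # WSGI environ key (HTTP_*) -> lowercase HTTP header name.
    return key.lower()[5:].replace("_", "-")
```

For example, `to_wsgi("X-Request-Id")` yields `"HTTP_X_REQUEST_ID"`, and `from_wsgi` reverses it back to `"x-request-id"`.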
- self.background_task = _lookup_environ_setting(environ, - 'newrelic.set_background_task', False) - - self.ignore_transaction = _lookup_environ_setting(environ, - 'newrelic.ignore_transaction', False) - self.suppress_apdex = _lookup_environ_setting(environ, - 'newrelic.suppress_apdex_metric', False) - self.suppress_transaction_trace = _lookup_environ_setting(environ, - 'newrelic.suppress_transaction_trace', False) - self.capture_params = _lookup_environ_setting(environ, - 'newrelic.capture_request_params', - settings.capture_params) - self.autorum_disabled = _lookup_environ_setting(environ, - 'newrelic.disable_browser_autorum', - not settings.browser_monitoring.auto_instrument) + self.background_task = _lookup_environ_setting(environ, "newrelic.set_background_task", False) + + self.ignore_transaction = _lookup_environ_setting(environ, "newrelic.ignore_transaction", False) + self.suppress_apdex = _lookup_environ_setting(environ, "newrelic.suppress_apdex_metric", False) + self.suppress_transaction_trace = _lookup_environ_setting(environ, "newrelic.suppress_transaction_trace", False) + self.capture_params = _lookup_environ_setting( + environ, "newrelic.capture_request_params", settings.capture_params + ) + self.autorum_disabled = _lookup_environ_setting( + environ, "newrelic.disable_browser_autorum", not settings.browser_monitoring.auto_instrument + ) # Make sure that if high security mode is enabled that # capture of request params is still being disabled. @@ -697,17 +667,17 @@ def __init__(self, application, environ, source=None): # due to use of REST style URL concepts or # otherwise. - request_uri = environ.get('REQUEST_URI', None) + request_uri = environ.get("REQUEST_URI", None) if request_uri is None: # The gunicorn WSGI server uses RAW_URI instead # of the more typical REQUEST_URI used by Apache # and other web servers. 
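The comment above notes that gunicorn exposes `RAW_URI` where Apache and most other servers use `REQUEST_URI`. The fallback chain can be sketched as a small helper (illustrative only; the helper name is an assumption, not agent API):

```python
def resolve_request_uri(environ):
    # Prefer REQUEST_URI (Apache et al.); gunicorn exposes RAW_URI instead.
    request_uri = environ.get("REQUEST_URI")
    if request_uri is None:
        request_uri = environ.get("RAW_URI")
    if request_uri is None:
        # Last resort: reconstruct the path from SCRIPT_NAME + PATH_INFO.
        script_name = environ.get("SCRIPT_NAME", "")
        path_info = environ.get("PATH_INFO", "")
        request_uri = (script_name + path_info) or None
    return request_uri
```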
- request_uri = environ.get('RAW_URI', None) + request_uri = environ.get("RAW_URI", None) - script_name = environ.get('SCRIPT_NAME', None) - path_info = environ.get('PATH_INFO', None) + script_name = environ.get("SCRIPT_NAME", None) + path_info = environ.get("PATH_INFO", None) self._request_uri = request_uri @@ -728,13 +698,13 @@ def __init__(self, application, environ, source=None): else: path = script_name + path_info - self.set_transaction_name(path, 'Uri', priority=1) + self.set_transaction_name(path, "Uri", priority=1) if self._request_uri is None: self._request_uri = path else: if self._request_uri is not None: - self.set_transaction_name(self._request_uri, 'Uri', priority=1) + self.set_transaction_name(self._request_uri, "Uri", priority=1) # mod_wsgi sets its own distinct variables for queue time # automatically. Initially it set mod_wsgi.queue_start, @@ -758,7 +728,7 @@ def __init__(self, application, environ, source=None): continue try: - if value.startswith('t='): + if value.startswith("t="): try: self.queue_start = _parse_time_stamp(float(value[2:])) except Exception: @@ -773,58 +743,40 @@ def __init__(self, application, environ, source=None): pass def __exit__(self, exc, value, tb): - self.record_custom_metric('Python/WSGI/Input/Bytes', - self._bytes_read) - self.record_custom_metric('Python/WSGI/Input/Time', - self.read_duration) - self.record_custom_metric('Python/WSGI/Input/Calls/read', - self._calls_read) - self.record_custom_metric('Python/WSGI/Input/Calls/readline', - self._calls_readline) - self.record_custom_metric('Python/WSGI/Input/Calls/readlines', - self._calls_readlines) - - self.record_custom_metric('Python/WSGI/Output/Bytes', - self._bytes_sent) - self.record_custom_metric('Python/WSGI/Output/Time', - self.sent_duration) - self.record_custom_metric('Python/WSGI/Output/Calls/yield', - self._calls_yield) - self.record_custom_metric('Python/WSGI/Output/Calls/write', - self._calls_write) + self.record_custom_metric("Python/WSGI/Input/Bytes", 
self._bytes_read) + self.record_custom_metric("Python/WSGI/Input/Time", self.read_duration) + self.record_custom_metric("Python/WSGI/Input/Calls/read", self._calls_read) + self.record_custom_metric("Python/WSGI/Input/Calls/readline", self._calls_readline) + self.record_custom_metric("Python/WSGI/Input/Calls/readlines", self._calls_readlines) + + self.record_custom_metric("Python/WSGI/Output/Bytes", self._bytes_sent) + self.record_custom_metric("Python/WSGI/Output/Time", self.sent_duration) + self.record_custom_metric("Python/WSGI/Output/Calls/yield", self._calls_yield) + self.record_custom_metric("Python/WSGI/Output/Calls/write", self._calls_write) return super(WSGIWebTransaction, self).__exit__(exc, value, tb) def _update_agent_attributes(self): # Add WSGI agent attributes if self.read_duration != 0: - self._add_agent_attribute('wsgi.input.seconds', - self.read_duration) + self._add_agent_attribute("wsgi.input.seconds", self.read_duration) if self._bytes_read != 0: - self._add_agent_attribute('wsgi.input.bytes', - self._bytes_read) + self._add_agent_attribute("wsgi.input.bytes", self._bytes_read) if self._calls_read != 0: - self._add_agent_attribute('wsgi.input.calls.read', - self._calls_read) + self._add_agent_attribute("wsgi.input.calls.read", self._calls_read) if self._calls_readline != 0: - self._add_agent_attribute('wsgi.input.calls.readline', - self._calls_readline) + self._add_agent_attribute("wsgi.input.calls.readline", self._calls_readline) if self._calls_readlines != 0: - self._add_agent_attribute('wsgi.input.calls.readlines', - self._calls_readlines) + self._add_agent_attribute("wsgi.input.calls.readlines", self._calls_readlines) if self.sent_duration != 0: - self._add_agent_attribute('wsgi.output.seconds', - self.sent_duration) + self._add_agent_attribute("wsgi.output.seconds", self.sent_duration) if self._bytes_sent != 0: - self._add_agent_attribute('wsgi.output.bytes', - self._bytes_sent) + self._add_agent_attribute("wsgi.output.bytes", 
self._bytes_sent) if self._calls_write != 0: - self._add_agent_attribute('wsgi.output.calls.write', - self._calls_write) + self._add_agent_attribute("wsgi.output.calls.write", self._calls_write) if self._calls_yield != 0: - self._add_agent_attribute('wsgi.output.calls.yield', - self._calls_yield) + self._add_agent_attribute("wsgi.output.calls.yield", self._calls_yield) return super(WSGIWebTransaction, self)._update_agent_attributes() @@ -842,20 +794,28 @@ def process_response(self, status, response_headers, *args): # would raise as a 500 for WSGI applications). try: - status = status.split(' ', 1)[0] + status = status.split(" ", 1)[0] except Exception: status = None - return super(WSGIWebTransaction, self).process_response( - status, response_headers) - - -def WebTransactionWrapper(wrapped, application=None, name=None, group=None, - scheme=None, host=None, port=None, request_method=None, - request_path=None, query_string=None, headers=None, source=None): - + return super(WSGIWebTransaction, self).process_response(status, response_headers) + + +def WebTransactionWrapper( + wrapped, + application=None, + name=None, + group=None, + scheme=None, + host=None, + port=None, + request_method=None, + request_path=None, + query_string=None, + headers=None, + source=None, +): def wrapper(wrapped, instance, args, kwargs): - if type(application) != Application: _application = application_instance(application) else: @@ -935,7 +895,6 @@ def wrapper(wrapped, instance, args, kwargs): else: _headers = headers - proxy = async_proxy(wrapped) source_arg = source or wrapped @@ -943,17 +902,37 @@ def wrapper(wrapped, instance, args, kwargs): def create_transaction(transaction): if transaction: return None - return WebTransaction( _application, _name, _group, - _scheme, _host, _port, _request_method, - _request_path, _query_string, _headers, source=source_arg) + return WebTransaction( + _application, + _name, + _group, + _scheme, + _host, + _port, + _request_method, + _request_path, + 
_query_string, + _headers, + source=source_arg, + ) if proxy: context_manager = TransactionContext(create_transaction) return proxy(wrapped(*args, **kwargs), context_manager) transaction = WebTransaction( - _application, _name, _group, _scheme, _host, _port, - _request_method, _request_path, _query_string, _headers, source=source_arg) + _application, + _name, + _group, + _scheme, + _host, + _port, + _request_method, + _request_path, + _query_string, + _headers, + source=source_arg, + ) transaction = create_transaction(current_transaction(active_only=False)) @@ -966,22 +945,50 @@ def create_transaction(transaction): return FunctionWrapper(wrapped, wrapper) -def web_transaction(application=None, name=None, group=None, - scheme=None, host=None, port=None, request_method=None, - request_path=None, query_string=None, headers=None): - - return functools.partial(WebTransactionWrapper, - application=application, name=name, group=group, - scheme=scheme, host=host, port=port, request_method=request_method, - request_path=request_path, query_string=query_string, - headers=headers) - - -def wrap_web_transaction(module, object_path, - application=None, name=None, group=None, - scheme=None, host=None, port=None, request_method=None, - request_path=None, query_string=None, headers=None): - - return wrap_object(module, object_path, WebTransactionWrapper, - (application, name, group, scheme, host, port, request_method, - request_path, query_string, headers)) +def web_transaction( + application=None, + name=None, + group=None, + scheme=None, + host=None, + port=None, + request_method=None, + request_path=None, + query_string=None, + headers=None, +): + return functools.partial( + WebTransactionWrapper, + application=application, + name=name, + group=group, + scheme=scheme, + host=host, + port=port, + request_method=request_method, + request_path=request_path, + query_string=query_string, + headers=headers, + ) + + +def wrap_web_transaction( + module, + object_path, + 
application=None, + name=None, + group=None, + scheme=None, + host=None, + port=None, + request_method=None, + request_path=None, + query_string=None, + headers=None, +): + return wrap_object( + module, + object_path, + WebTransactionWrapper, + (application, name, group, scheme, host, port, request_method, request_path, query_string, headers), + ) diff --git a/newrelic/api/wsgi_application.py b/newrelic/api/wsgi_application.py index 67338cbddd..5d12e94f30 100644 --- a/newrelic/api/wsgi_application.py +++ b/newrelic/api/wsgi_application.py @@ -78,7 +78,6 @@ def close(self): try: with FunctionTrace(name="Finalize", group="Python/WSGI"): - if isinstance(self.generator, _WSGIApplicationMiddleware): self.generator.close() @@ -153,7 +152,6 @@ def readlines(self, *args, **kwargs): class _WSGIApplicationMiddleware(object): - # This is a WSGI middleware for automatically inserting RUM into # HTML responses. It only works for where a WSGI application is # returning response content via a iterable/generator. It does not @@ -204,16 +202,7 @@ def process_data(self, data): # works then we are done, else we move to next phase of # buffering up content until we find the body element. - def html_to_be_inserted(): - header = self.transaction.browser_timing_header() - - if not header: - return b"" - - footer = self.transaction.browser_timing_footer() - - return six.b(header) + six.b(footer) - + html_to_be_inserted = lambda: six.b(self.transaction.browser_timing_header()) if not self.response_data: modified = insert_html_snippet(data, html_to_be_inserted) @@ -340,7 +329,6 @@ def start_response(self, status, response_headers, *args): # Also check whether RUM insertion has already occurred. 
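The `_WSGIApplicationMiddleware` above injects the browser timing header into HTML responses via `insert_html_snippet`. As a minimal sketch of the underlying idea (not the agent's implementation, which handles buffering, charsets, and more placement rules), one could insert the snippet immediately after the opening `<head>` tag:

```python
import re

# Naive matcher for an opening <head ...> tag; the agent's real logic
# considers more insertion points and is more careful than this.
HEAD_RE = re.compile(rb"<head[^>]*>", re.IGNORECASE)


def insert_after_head(html, snippet):
    # Insert `snippet` (bytes) right after the first <head> tag, if any.
    match = HEAD_RE.search(html)
    if not match:
        return html  # no <head>; leave the document untouched
    end = match.end()
    return html[:end] + snippet + html[end:]
```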
if self.transaction.autorum_disabled or self.transaction.rum_header_generated: - self.flush_headers() self.pass_through = True @@ -360,7 +348,7 @@ def start_response(self, status, response_headers, *args): content_encoding = None content_disposition = None - for (name, value) in response_headers: + for name, value in response_headers: _name = name.lower() if _name == "content-length": @@ -508,7 +496,6 @@ def __iter__(self): def WSGIApplicationWrapper(wrapped, application=None, name=None, group=None, framework=None, dispatcher=None): - # Python 2 does not allow rebinding nonlocal variables, so to fix this # framework must be stored in list so it can be edited by closure. _framework = [framework] @@ -649,7 +636,6 @@ def _args(environ, start_response, *args, **kwargs): transaction.set_transaction_name(name, group, priority=1) def _start_response(status, response_headers, *args): - additional_headers = transaction.process_response(status, response_headers, *args) _write = start_response(status, response_headers + additional_headers, *args) diff --git a/newrelic/common/agent_http.py b/newrelic/common/agent_http.py index 89876a60c7..0e1fa682be 100644 --- a/newrelic/common/agent_http.py +++ b/newrelic/common/agent_http.py @@ -23,7 +23,11 @@ import newrelic.packages.urllib3 as urllib3 from newrelic import version from newrelic.common import certs -from newrelic.common.encoding_utils import json_decode, json_encode +from newrelic.common.encoding_utils import ( + json_decode, + json_encode, + obfuscate_license_key, +) from newrelic.common.object_names import callable_name from newrelic.common.object_wrapper import patch_function_wrapper from newrelic.core.internal_metrics import internal_count_metric, internal_metric @@ -41,6 +45,9 @@ def get_default_verify_paths(): return _DEFAULT_CERT_PATH +HEADER_AUDIT_LOGGING_DENYLIST = frozenset(("x-api-key", "api-key")) + + # User agent string that must be used in all requests. 
The data collector # does not rely on this, but is used to target specific agents if there # is a problem with data collector handling requests. @@ -119,6 +126,14 @@ def log_request(cls, fp, method, url, params, payload, headers, body=None, compr if not fp: return + # Obfuscate license key from headers and URL params + if headers: + headers = {k: obfuscate_license_key(v) if k.lower() in HEADER_AUDIT_LOGGING_DENYLIST else v for k, v in headers.items()} + + if params and "license_key" in params: + params = params.copy() + params["license_key"] = obfuscate_license_key(params["license_key"]) + # Maintain a global AUDIT_LOG_ID attached to all class instances # NOTE: this is not thread safe so this class cannot be used # across threads when audit logging is on diff --git a/newrelic/common/encoding_utils.py b/newrelic/common/encoding_utils.py index ef8624240f..41ffb1dfa7 100644 --- a/newrelic/common/encoding_utils.py +++ b/newrelic/common/encoding_utils.py @@ -31,14 +31,14 @@ from newrelic.packages import six -HEXDIGLC_RE = re.compile('^[0-9a-f]+$') -DELIMITER_FORMAT_RE = re.compile('[ \t]*,[ \t]*') +HEXDIGLC_RE = re.compile("^[0-9a-f]+$") +DELIMITER_FORMAT_RE = re.compile("[ \t]*,[ \t]*") PARENT_TYPE = { - '0': 'App', - '1': 'Browser', - '2': 'Mobile', + "0": "App", + "1": "Browser", + "2": "Mobile", } -BASE64_DECODE_STR = getattr(base64, 'decodestring', None) +BASE64_DECODE_STR = getattr(base64, "decodestring", None) # Functions for encoding/decoding JSON. These wrappers are used in order @@ -48,6 +48,7 @@ # be supplied as key word arguments to allow the wrappers to supply # defaults. + def json_encode(obj, **kwargs): _kwargs = {} @@ -79,21 +80,21 @@ def json_encode(obj, **kwargs): # The third is eliminate white space after separators to trim the # size of the data being sent. 
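The `log_request` change above scrubs license keys from audit logs: denylisted headers are obfuscated and the `license_key` query parameter is masked on a copy of the params. A standalone sketch of that scrubbing (`_obfuscate` here is a simplified stand-in for `obfuscate_license_key`, which this patch defines in `encoding_utils`):

```python
HEADER_AUDIT_LOGGING_DENYLIST = frozenset(("x-api-key", "api-key"))


def _obfuscate(value):
    # Simplified stand-in for encoding_utils.obfuscate_license_key:
    # show the first 8 characters of a valid 40-char key, else redact fully.
    return value[:8] + "*" * 32 if len(value) == 40 else "*" * len(value)


def scrub_for_audit_log(headers, params):
    # Mask sensitive header values and the license_key query parameter
    # before anything is written to the audit log. Inputs are not mutated.
    if headers:
        headers = {
            k: _obfuscate(v) if k.lower() in HEADER_AUDIT_LOGGING_DENYLIST else v
            for k, v in headers.items()
        }
    if params and "license_key" in params:
        params = params.copy()
        params["license_key"] = _obfuscate(params["license_key"])
    return headers, params
```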
- if type(b'') is type(''): # NOQA - _kwargs['encoding'] = 'latin-1' + if type(b"") is type(""): # noqa, pylint: disable=C0123 + _kwargs["encoding"] = "latin-1" def _encode(o): if isinstance(o, bytes): - return o.decode('latin-1') + return o.decode("latin-1") elif isinstance(o, types.GeneratorType): return list(o) - elif hasattr(o, '__iter__'): + elif hasattr(o, "__iter__"): return list(iter(o)) - raise TypeError(repr(o) + ' is not JSON serializable') + raise TypeError(repr(o) + " is not JSON serializable") - _kwargs['default'] = _encode + _kwargs["default"] = _encode - _kwargs['separators'] = (',', ':') + _kwargs["separators"] = (",", ":") # We still allow supplied arguments to override internal defaults if # necessary, but the caller must be sure they aren't dependent on @@ -111,6 +112,7 @@ def json_decode(s, **kwargs): return json.loads(s, **kwargs) + # Functions for obfuscating/deobfuscating text string based on an XOR # cipher. @@ -124,7 +126,7 @@ def xor_cipher_genkey(key, length=None): """ - return bytearray(key[:length], encoding='ascii') + return bytearray(key[:length], encoding="ascii") def xor_cipher_encrypt(text, key): @@ -190,8 +192,8 @@ def xor_cipher_encrypt_base64(text, key): # isn't UTF-8 and so fail with a Unicode decoding error. if isinstance(text, bytes): - text = text.decode('latin-1') - text = text.encode('utf-8').decode('latin-1') + text = text.decode("latin-1") + text = text.encode("utf-8").decode("latin-1") result = base64.b64encode(bytes(xor_cipher_encrypt(text, key))) @@ -202,7 +204,7 @@ def xor_cipher_encrypt_base64(text, key): # produces characters within that codeset. 
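The obfuscate/deobfuscate helpers above are an XOR cipher followed by base64 encoding (the agent keys it with the first 13 characters of the license key, per the comments earlier in this patch). A self-contained round-trip sketch of the technique, not the agent's exact functions:

```python
import base64


def xor_obfuscate(text, key):
    # XOR each byte of the UTF-8 text with the repeating key bytes,
    # then base64-encode so the result is safe to transmit as ASCII.
    key_bytes = key.encode("ascii")
    data = text.encode("utf-8")
    encrypted = bytes(b ^ key_bytes[i % len(key_bytes)] for i, b in enumerate(data))
    return base64.b64encode(encrypted).decode("ascii")


def xor_deobfuscate(text, key):
    # Reverse: base64-decode, then XOR with the same repeating key.
    key_bytes = key.encode("ascii")
    data = base64.b64decode(text)
    decrypted = bytes(b ^ key_bytes[i % len(key_bytes)] for i, b in enumerate(data))
    return decrypted.decode("utf-8")
```

XOR is symmetric, so applying the same key twice recovers the original text; this is obfuscation for transport, not cryptographic protection.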
if six.PY3: - return result.decode('ascii') + return result.decode("ascii") return result @@ -223,7 +225,7 @@ def xor_cipher_decrypt_base64(text, key): result = xor_cipher_decrypt(bytearray(base64.b64decode(text)), key) - return bytes(result).decode('utf-8') + return bytes(result).decode("utf-8") obfuscate = xor_cipher_encrypt_base64 @@ -240,13 +242,13 @@ def unpack_field(field): """ if not isinstance(field, bytes): - field = field.encode('UTF-8') + field = field.encode("UTF-8") - data = getattr(base64, 'decodebytes', BASE64_DECODE_STR)(field) + data = getattr(base64, "decodebytes", BASE64_DECODE_STR)(field) data = zlib.decompress(data) if isinstance(data, bytes): - data = data.decode('Latin-1') + data = data.decode("Latin-1") data = json_decode(data) return data @@ -260,13 +262,13 @@ def generate_path_hash(name, seed): """ - rotated = ((seed << 1) | (seed >> 31)) & 0xffffffff + rotated = ((seed << 1) | (seed >> 31)) & 0xFFFFFFFF if not isinstance(name, bytes): - name = name.encode('UTF-8') + name = name.encode("UTF-8") - path_hash = (rotated ^ int(hashlib.md5(name).hexdigest()[-8:], base=16)) - return '%08x' % path_hash + path_hash = rotated ^ int(hashlib.md5(name).hexdigest()[-8:], base=16) # nosec + return "%08x" % path_hash def base64_encode(text): @@ -291,11 +293,11 @@ def base64_encode(text): # and so fail with a Unicode decoding error. if isinstance(text, bytes): - text = text.decode('latin-1') - text = text.encode('utf-8').decode('latin-1') + text = text.decode("latin-1") + text = text.encode("utf-8").decode("latin-1") # Re-encode as utf-8 when passing to b64 encoder - result = base64.b64encode(text.encode('utf-8')) + result = base64.b64encode(text.encode("utf-8")) # The result from base64 encoding will be a byte string but since # dealing with byte strings in Python 2 and Python 3 is quite @@ -304,7 +306,7 @@ def base64_encode(text): # produces characters within that codeset. 
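The `base64_encode`/`base64_decode` pair above round-trips arbitrary text through UTF-8 and base64, returning an ASCII `str` on Python 3. A minimal Python 3-only sketch of the same contract (helper names assumed):

```python
import base64


def b64_encode_text(text):
    # Encode text as UTF-8, base64 it, and return an ASCII str.
    return base64.b64encode(text.encode("utf-8")).decode("ascii")


def b64_decode_text(encoded):
    # Inverse: base64-decode and interpret the bytes as UTF-8.
    return base64.b64decode(encoded).decode("utf-8")
```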
if six.PY3: - return result.decode('ascii') + return result.decode("ascii") return result @@ -314,7 +316,7 @@ def base64_decode(text): the decoded text is UTF-8 encoded. """ - return base64.b64decode(text).decode('utf-8') + return base64.b64decode(text).decode("utf-8") def gzip_compress(text): @@ -325,9 +327,9 @@ def gzip_compress(text): compressed_data = io.BytesIO() if six.PY3 and isinstance(text, str): - text = text.encode('utf-8') + text = text.encode("utf-8") - with gzip.GzipFile(fileobj=compressed_data, mode='wb') as f: + with gzip.GzipFile(fileobj=compressed_data, mode="wb") as f: f.write(text) return compressed_data.getvalue() @@ -340,7 +342,7 @@ def gzip_decompress(payload): """ data_bytes = io.BytesIO(payload) decoded_data = gzip.GzipFile(fileobj=data_bytes).read() - return decoded_data.decode('utf-8') + return decoded_data.decode("utf-8") def serverless_payload_encode(payload): @@ -358,7 +360,7 @@ def serverless_payload_encode(payload): def ensure_str(s): if not isinstance(s, six.string_types): try: - s = s.decode('utf-8') + s = s.decode("utf-8") except Exception: return return s @@ -370,8 +372,8 @@ def serverless_payload_decode(text): Python object. 
""" - if hasattr(text, 'decode'): - text = text.decode('utf-8') + if hasattr(text, "decode"): + text = text.decode("utf-8") decoded_bytes = base64.b64decode(text) uncompressed_data = gzip_decompress(decoded_bytes) @@ -384,8 +386,7 @@ def decode_newrelic_header(encoded_header, encoding_key): decoded_header = None if encoded_header: try: - decoded_header = json_decode(deobfuscate( - encoded_header, encoding_key)) + decoded_header = json_decode(deobfuscate(encoded_header, encoding_key)) except Exception: pass @@ -402,7 +403,6 @@ def convert_to_cat_metadata_value(nr_headers): class DistributedTracePayload(dict): - version = (0, 1) def text(self): @@ -437,17 +437,16 @@ def decode(cls, payload): class W3CTraceParent(dict): - def text(self): - if 'id' in self: - guid = self['id'] + if "id" in self: + guid = self["id"] else: - guid = '{:016x}'.format(random.getrandbits(64)) + guid = "{:016x}".format(random.getrandbits(64)) - return '00-{}-{}-{:02x}'.format( - self['tr'].lower().zfill(32), + return "00-{}-{}-{:02x}".format( + self["tr"].lower().zfill(32), guid, - int(self.get('sa', 0)), + int(self.get("sa", 0)), ) @classmethod @@ -456,7 +455,7 @@ def decode(cls, payload): if len(payload) < 55: return None - fields = payload.split('-', 4) + fields = payload.split("-", 4) # Expect that there are at least 4 fields if len(fields) < 4: @@ -469,11 +468,11 @@ def decode(cls, payload): return None # Version 255 is invalid - if version == 'ff': + if version == "ff": return None # Expect exactly 4 fields if version 00 - if version == '00' and len(fields) != 4: + if version == "00" and len(fields) != 4: return None # Check field lengths and values @@ -483,18 +482,15 @@ def decode(cls, payload): # trace_id or parent_id of all 0's are invalid trace_id, parent_id = fields[1:3] - if parent_id == '0' * 16 or trace_id == '0' * 32: + if parent_id == "0" * 16 or trace_id == "0" * 32: return None return cls(tr=trace_id, id=parent_id) class W3CTraceState(OrderedDict): - def text(self, 
limit=32): - return ','.join( - '{}={}'.format(k, v) - for k, v in itertools.islice(self.items(), limit)) + return ",".join("{}={}".format(k, v) for k, v in itertools.islice(self.items(), limit)) @classmethod def decode(cls, tracestate): @@ -502,9 +498,8 @@ def decode(cls, tracestate): vendors = cls() for entry in entries: - vendor_value = entry.split('=', 2) - if (len(vendor_value) != 2 or - any(len(v) > 256 for v in vendor_value)): + vendor_value = entry.split("=", 2) + if len(vendor_value) != 2 or any(len(v) > 256 for v in vendor_value): continue vendor, value = vendor_value @@ -514,36 +509,38 @@ def decode(cls, tracestate): class NrTraceState(dict): - FIELDS = ('ty', 'ac', 'ap', 'id', 'tx', 'sa', 'pr') + FIELDS = ("ty", "ac", "ap", "id", "tx", "sa", "pr") def text(self): - pr = self.get('pr', '') + pr = self.get("pr", "") if pr: - pr = ('%.6f' % pr).rstrip('0').rstrip('.') - - payload = '-'.join(( - '0-0', - self['ac'], - self['ap'], - self.get('id', ''), - self.get('tx', ''), - '1' if self.get('sa') else '0', - pr, - str(self['ti']), - )) - return '{}@nr={}'.format( - self.get('tk', self['ac']), + pr = ("%.6f" % pr).rstrip("0").rstrip(".") + + payload = "-".join( + ( + "0-0", + self["ac"], + self["ap"], + self.get("id", ""), + self.get("tx", ""), + "1" if self.get("sa") else "0", + pr, + str(self["ti"]), + ) + ) + return "{}@nr={}".format( + self.get("tk", self["ac"]), payload, ) @classmethod def decode(cls, payload, tk): - fields = payload.split('-', 9) + fields = payload.split("-", 9) if len(fields) >= 9 and all(fields[:4]) and fields[8]: data = cls(tk=tk) try: - data['ti'] = int(fields[8]) + data["ti"] = int(fields[8]) except: return @@ -551,23 +548,85 @@ def decode(cls, payload, tk): if value: data[name] = value - if data['ty'] in PARENT_TYPE: - data['ty'] = PARENT_TYPE[data['ty']] + if data["ty"] in PARENT_TYPE: + data["ty"] = PARENT_TYPE[data["ty"]] else: return - if 'sa' in data: - if data['sa'] == '1': - data['sa'] = True - elif data['sa'] == '0': - 
data['sa'] = False + if "sa" in data: + if data["sa"] == "1": + data["sa"] = True + elif data["sa"] == "0": + data["sa"] = False else: - data['sa'] = None + data["sa"] = None - if 'pr' in data: + if "pr" in data: try: - data['pr'] = float(fields[7]) + data["pr"] = float(fields[7]) except: - data['pr'] = None + data["pr"] = None return data + + +def capitalize(string): + """Capitalize the first letter of a string.""" + if not string: + return string + elif len(string) == 1: + return string.capitalize() + else: + return "".join((string[0].upper(), string[1:])) + + +def camel_case(string, upper=False): + """ + Convert a string of snake case to camel case. + + Setting upper=True will capitalize the first letter. Defaults to False, where no change is made to the first letter. + """ + string = ensure_str(string) + split_string = list(string.split("_")) + + if len(split_string) < 2: + if upper: + return capitalize(string) + else: + return string + else: + if upper: + camel_cased_string = "".join([capitalize(substr) for substr in split_string]) + else: + camel_cased_string = split_string[0] + "".join([capitalize(substr) for substr in split_string[1:]]) + + return camel_cased_string + + +_snake_case_re = re.compile(r"([A-Z]+[a-z]*)") + + +def snake_case(string): + """Convert a string of camel case to snake case. Assumes no repeated runs of capital letters.""" + string = ensure_str(string) + if "_" in string: + return string # Don't touch strings that are already snake cased + + return "_".join([s for s in _snake_case_re.split(string) if s]).lower() + + +_obfuscate_license_key_ending = "*" * 32 + + +def obfuscate_license_key(license_key): + """Obfuscate license key to allow it to be printed out.""" + + if not isinstance(license_key, six.string_types): + # For non-string values passed in such as None, return the original. 
+ return license_key + elif len(license_key) == 40: + # For valid license keys of length 40, show the first 8 characters and then replace the remainder with **** + return license_key[:8] + _obfuscate_license_key_ending + else: + # For invalid lengths of license key, it's unclear how much is acceptable to show, so fully redact with **** + return "*" * len(license_key) diff --git a/newrelic/common/object_wrapper.py b/newrelic/common/object_wrapper.py index 7d9824fe0c..09c737fd2b 100644 --- a/newrelic/common/object_wrapper.py +++ b/newrelic/common/object_wrapper.py @@ -19,16 +19,19 @@ """ -import sys import inspect - -from newrelic.packages import six - -from newrelic.packages.wrapt import (ObjectProxy as _ObjectProxy, - FunctionWrapper as _FunctionWrapper, - BoundFunctionWrapper as _BoundFunctionWrapper) - -from newrelic.packages.wrapt.wrappers import _FunctionWrapperBase +import warnings + +from newrelic.packages.wrapt import BoundFunctionWrapper as _BoundFunctionWrapper +from newrelic.packages.wrapt import CallableObjectProxy as _CallableObjectProxy +from newrelic.packages.wrapt import FunctionWrapper as _FunctionWrapper +from newrelic.packages.wrapt import ObjectProxy as _ObjectProxy +from newrelic.packages.wrapt import ( # noqa: F401; pylint: disable=W0611 + apply_patch, + resolve_path, + wrap_object, + wrap_object_attribute, +) # We previously had our own pure Python implementation of the generic # object wrapper but we now defer to using the wrapt module as its C @@ -47,28 +50,36 @@ # ObjectProxy or FunctionWrapper should be used going forward. -class _ObjectWrapperBase(object): +class ObjectProxy(_ObjectProxy): + """ + This class provides method overrides for all object wrappers used by the + agent. These methods allow attributes to be defined with the special prefix + _nr_ to be interpreted as attributes on the wrapper, rather than the + wrapped object.
Inheriting from the base class wrapt.ObjectProxy preserves + method resolution order (MRO) through multiple inheritance. + (See https://www.python.org/download/releases/2.3/mro/). + """ def __setattr__(self, name, value): - if name.startswith('_nr_'): - name = name.replace('_nr_', '_self_', 1) + if name.startswith("_nr_"): + name = name.replace("_nr_", "_self_", 1) setattr(self, name, value) else: - _ObjectProxy.__setattr__(self, name, value) + super(ObjectProxy, self).__setattr__(name, value) def __getattr__(self, name): - if name.startswith('_nr_'): - name = name.replace('_nr_', '_self_', 1) + if name.startswith("_nr_"): + name = name.replace("_nr_", "_self_", 1) return getattr(self, name) else: - return _ObjectProxy.__getattr__(self, name) + return super(ObjectProxy, self).__getattr__(name) def __delattr__(self, name): - if name.startswith('_nr_'): - name = name.replace('_nr_', '_self_', 1) + if name.startswith("_nr_"): + name = name.replace("_nr_", "_self_", 1) delattr(self, name) else: - _ObjectProxy.__delattr__(self, name) + super(ObjectProxy, self).__delattr__(name) @property def _nr_next_object(self): @@ -79,8 +90,7 @@ def _nr_last_object(self): try: return self._self_last_object except AttributeError: - self._self_last_object = getattr(self.__wrapped__, - '_nr_last_object', self.__wrapped__) + self._self_last_object = getattr(self.__wrapped__, "_nr_last_object", self.__wrapped__) return self._self_last_object @property @@ -96,166 +106,39 @@ def _nr_parent(self): return self._self_parent -class _NRBoundFunctionWrapper(_ObjectWrapperBase, _BoundFunctionWrapper): +class _NRBoundFunctionWrapper(ObjectProxy, _BoundFunctionWrapper): pass -class FunctionWrapper(_ObjectWrapperBase, _FunctionWrapper): +class FunctionWrapper(ObjectProxy, _FunctionWrapper): __bound_function_wrapper__ = _NRBoundFunctionWrapper -class ObjectProxy(_ObjectProxy): - - def __setattr__(self, name, value): - if name.startswith('_nr_'): - name = name.replace('_nr_', '_self_', 1) - 
setattr(self, name, value) - else: - _ObjectProxy.__setattr__(self, name, value) - - def __getattr__(self, name): - if name.startswith('_nr_'): - name = name.replace('_nr_', '_self_', 1) - return getattr(self, name) - else: - return _ObjectProxy.__getattr__(self, name) - - def __delattr__(self, name): - if name.startswith('_nr_'): - name = name.replace('_nr_', '_self_', 1) - delattr(self, name) - else: - _ObjectProxy.__delattr__(self, name) - - @property - def _nr_next_object(self): - return self.__wrapped__ - - @property - def _nr_last_object(self): - try: - return self._self_last_object - except AttributeError: - self._self_last_object = getattr(self.__wrapped__, - '_nr_last_object', self.__wrapped__) - return self._self_last_object - - -class CallableObjectProxy(ObjectProxy): +class CallableObjectProxy(ObjectProxy, _CallableObjectProxy): + pass - def __call__(self, *args, **kwargs): - return self.__wrapped__(*args, **kwargs) # The ObjectWrapper class needs to be deprecated and removed once all our # own code no longer uses it. It reaches down into what are wrapt internals # at present which shouldn't be doing. - -class ObjectWrapper(_ObjectWrapperBase, _FunctionWrapperBase): - __bound_function_wrapper__ = _NRBoundFunctionWrapper - +class ObjectWrapper(FunctionWrapper): def __init__(self, wrapped, instance, wrapper): - if isinstance(wrapped, classmethod): - binding = 'classmethod' - elif isinstance(wrapped, staticmethod): - binding = 'staticmethod' - else: - binding = 'function' - - super(ObjectWrapper, self).__init__(wrapped, instance, wrapper, - binding=binding) + warnings.warn( + ("The ObjectWrapper API is deprecated. Please use one of ObjectProxy, FunctionWrapper, or CallableObjectProxy instead."), + DeprecationWarning, + ) + super(ObjectWrapper, self).__init__(wrapped, wrapper) -# Helper functions for performing monkey patching. 
- - -def resolve_path(module, name): - if isinstance(module, six.string_types): - __import__(module) - module = sys.modules[module] - - parent = module - - path = name.split('.') - attribute = path[0] - - original = getattr(parent, attribute) - for attribute in path[1:]: - parent = original - - # We can't just always use getattr() because in doing - # that on a class it will cause binding to occur which - # will complicate things later and cause some things not - # to work. For the case of a class we therefore access - # the __dict__ directly. To cope though with the wrong - # class being given to us, or a method being moved into - # a base class, we need to walk the class hierarchy to - # work out exactly which __dict__ the method was defined - # in, as accessing it from __dict__ will fail if it was - # not actually on the class given. Fallback to using - # getattr() if we can't find it. If it truly doesn't - # exist, then that will fail. - - if inspect.isclass(original): - for cls in inspect.getmro(original): - if attribute in vars(cls): - original = vars(cls)[attribute] - break - else: - original = getattr(original, attribute) - - else: - original = getattr(original, attribute) - - return (parent, attribute, original) - - -def apply_patch(parent, attribute, replacement): - setattr(parent, attribute, replacement) - - -def wrap_object(module, name, factory, args=(), kwargs={}): - (parent, attribute, original) = resolve_path(module, name) - wrapper = factory(original, *args, **kwargs) - apply_patch(parent, attribute, wrapper) - return wrapper - -# Function for apply a proxy object to an attribute of a class instance. -# The wrapper works by defining an attribute of the same name on the -# class which is a descriptor and which intercepts access to the -# instance attribute. Note that this cannot be used on attributes which -# are themselves defined by a property object. 
- - -class AttributeWrapper(object): - - def __init__(self, attribute, factory, args, kwargs): - self.attribute = attribute - self.factory = factory - self.args = args - self.kwargs = kwargs - - def __get__(self, instance, owner): - value = instance.__dict__[self.attribute] - return self.factory(value, *self.args, **self.kwargs) - - def __set__(self, instance, value): - instance.__dict__[self.attribute] = value - - def __delete__(self, instance): - del instance.__dict__[self.attribute] - - -def wrap_object_attribute(module, name, factory, args=(), kwargs={}): - path, attribute = name.rsplit('.', 1) - parent = resolve_path(module, path)[2] - wrapper = AttributeWrapper(attribute, factory, args, kwargs) - apply_patch(parent, attribute, wrapper) - return wrapper - # Function for creating a decorator for applying to functions, as well as # short cut functions for applying wrapper functions via monkey patching. +# WARNING: These functions are reproduced directly from wrapt, but using +# our FunctionWrapper class which includes the _nr_ attribute overrides +# that are inherited from our subclass of wrapt.ObjectProxy. These MUST be +# kept in sync with wrapt when upgrading, or drift may introduce bugs.
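The `_nr_` attribute handling described in the consolidated `ObjectProxy` docstring above can be illustrated without wrapt at all. The `PrefixProxy` class below is a hypothetical, dependency-free sketch of the same idea (the agent's real class subclasses `wrapt.ObjectProxy`): attributes named `_nr_*` are stored on the proxy itself under a `_self_*` name, while all other attribute traffic passes through to the wrapped object.

```python
class PrefixProxy(object):
    """Illustration only: store _nr_* attributes on the proxy (renamed to
    _self_*) so agent bookkeeping never touches the wrapped object."""

    def __init__(self, wrapped):
        # Bypass our own __setattr__ so _wrapped lands on the proxy itself.
        object.__setattr__(self, "_wrapped", wrapped)

    def __setattr__(self, name, value):
        if name.startswith("_nr_"):
            # Keep agent state on the proxy, renamed with the _self_ prefix.
            object.__setattr__(self, name.replace("_nr_", "_self_", 1), value)
        else:
            # Everything else writes through to the wrapped object.
            setattr(object.__getattribute__(self, "_wrapped"), name, value)

    def __getattr__(self, name):
        # Only invoked when normal lookup on the proxy fails.
        if name.startswith("_nr_"):
            return object.__getattribute__(self, name.replace("_nr_", "_self_", 1))
        return getattr(object.__getattribute__(self, "_wrapped"), name)
```

Setting `proxy._nr_flag = True` leaves the wrapped object untouched, while ordinary attribute reads and writes pass straight through.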
+ def function_wrapper(wrapper): def _wrapper(wrapped, instance, args, kwargs): @@ -267,6 +150,7 @@ def _wrapper(wrapped, instance, args, kwargs): else: target_wrapper = wrapper.__get__(instance, type(instance)) return FunctionWrapper(target_wrapped, target_wrapper) + return FunctionWrapper(wrapper, _wrapper) @@ -274,9 +158,10 @@ def wrap_function_wrapper(module, name, wrapper): return wrap_object(module, name, FunctionWrapper, (wrapper,)) -def patch_function_wrapper(module, name): +def patch_function_wrapper(module, name, enabled=None): def _wrapper(wrapper): - return wrap_object(module, name, FunctionWrapper, (wrapper,)) + return wrap_object(module, name, FunctionWrapper, (wrapper, enabled)) + return _wrapper @@ -299,10 +184,14 @@ def _execute(wrapped, instance, args, kwargs): return wrapped(*args, **kwargs) finally: setattr(parent, attribute, original) + return FunctionWrapper(target_wrapped, _execute) + return FunctionWrapper(wrapper, _wrapper) + return _decorator + # Generic decorators for performing actions before and after a wrapped # function is called, or modifying the inbound arguments or return value. 
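The "generic decorators for performing actions before and after a wrapped function" mentioned in the comment above follow a common shape. As a rough, simplified sketch of that pattern using plain `functools` rather than the agent's `FunctionWrapper` (so the bound-method `instance` handling shown in the diff is omitted):

```python
import functools


def pre_function(function):
    """Run `function` with the call arguments, then call the wrapped callable."""
    def decorator(wrapped):
        @functools.wraps(wrapped)
        def wrapper(*args, **kwargs):
            function(*args, **kwargs)
            return wrapped(*args, **kwargs)
        return wrapper
    return decorator


def post_function(function):
    """Call the wrapped callable first, then run `function` with the arguments."""
    def decorator(wrapped):
        @functools.wraps(wrapped)
        def wrapper(*args, **kwargs):
            result = wrapped(*args, **kwargs)
            function(*args, **kwargs)
            return result
        return wrapper
    return decorator
```

The real versions return a `FunctionWrapper` instead of a closure so that the `_nr_` attribute overrides and wrapt's descriptor behavior are preserved.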
@@ -315,6 +204,7 @@ def _wrapper(wrapped, instance, args, kwargs): else: function(*args, **kwargs) return wrapped(*args, **kwargs) + return _wrapper @@ -335,6 +225,7 @@ def _wrapper(wrapped, instance, args, kwargs): else: function(*args, **kwargs) return result + return _wrapper @@ -382,6 +273,7 @@ def out_function(function): @function_wrapper def _wrapper(wrapped, instance, args, kwargs): return function(wrapped(*args, **kwargs)) + return _wrapper diff --git a/newrelic/common/package_version_utils.py b/newrelic/common/package_version_utils.py index edefc4c0aa..5081f1bd07 100644 --- a/newrelic/common/package_version_utils.py +++ b/newrelic/common/package_version_utils.py @@ -135,6 +135,7 @@ def _get_package_version(name): if hasattr(sys.modules["importlib"].metadata, "packages_distributions"): # pylint: disable=E1101 distributions = sys.modules["importlib"].metadata.packages_distributions() # pylint: disable=E1101 distribution_name = distributions.get(name, name) + distribution_name = distribution_name[0] if isinstance(distribution_name, list) else distribution_name else: distribution_name = name diff --git a/newrelic/common/signature.py b/newrelic/common/signature.py index 3149981962..3fe516bdc2 100644 --- a/newrelic/common/signature.py +++ b/newrelic/common/signature.py @@ -18,7 +18,7 @@ from inspect import Signature def bind_args(func, args, kwargs): - """Bind arguments and apply defaults to missing arugments for a callable.""" + """Bind arguments and apply defaults to missing arguments for a callable.""" bound_args = Signature.from_callable(func).bind(*args, **kwargs) bound_args.apply_defaults() return bound_args.arguments @@ -27,5 +27,5 @@ def bind_args(func, args, kwargs): from inspect import getcallargs def bind_args(func, args, kwargs): - """Bind arguments and apply defaults to missing arugments for a callable.""" + """Bind arguments and apply defaults to missing arguments for a callable.""" return getcallargs(func, *args, **kwargs) diff --git 
a/newrelic/common/utilization.py b/newrelic/common/utilization.py index f205b4e132..94cfe2942b 100644 --- a/newrelic/common/utilization.py +++ b/newrelic/common/utilization.py @@ -17,14 +17,14 @@ import re import socket import string -import threading from newrelic.common.agent_http import InsecureHttpClient from newrelic.common.encoding_utils import json_decode from newrelic.core.internal_metrics import internal_count_metric _logger = logging.getLogger(__name__) -VALID_CHARS_RE = re.compile(r'[0-9a-zA-Z_ ./-]') +VALID_CHARS_RE = re.compile(r"[0-9a-zA-Z_ ./-]") + class UtilizationHttpClient(InsecureHttpClient): SOCKET_TIMEOUT = 0.05 @@ -46,38 +46,35 @@ def send_request(self, *args, **kwargs): class CommonUtilization(object): - METADATA_HOST = '' - METADATA_PATH = '' + METADATA_HOST = "" + METADATA_PATH = "" METADATA_QUERY = None HEADERS = None EXPECTED_KEYS = () - VENDOR_NAME = '' + VENDOR_NAME = "" FETCH_TIMEOUT = 0.4 CLIENT_CLS = UtilizationHttpClient @classmethod def record_error(cls, resource, data): # As per spec - internal_count_metric( - 'Supportability/utilization/%s/error' % cls.VENDOR_NAME, 1) - _logger.warning('Invalid %r data (%r): %r', - cls.VENDOR_NAME, resource, data) + internal_count_metric("Supportability/utilization/%s/error" % cls.VENDOR_NAME, 1) + _logger.warning("Invalid %r data (%r): %r", cls.VENDOR_NAME, resource, data) @classmethod def fetch(cls): try: - with cls.CLIENT_CLS(cls.METADATA_HOST, - timeout=cls.FETCH_TIMEOUT) as client: - resp = client.send_request(method='GET', - path=cls.METADATA_PATH, - params=cls.METADATA_QUERY, - headers=cls.HEADERS) + with cls.CLIENT_CLS(cls.METADATA_HOST, timeout=cls.FETCH_TIMEOUT) as client: + resp = client.send_request( + method="GET", path=cls.METADATA_PATH, params=cls.METADATA_QUERY, headers=cls.HEADERS + ) if not 200 <= resp[0] < 300: raise ValueError(resp[0]) return resp[1] except Exception as e: - _logger.debug('Unable to fetch %s data from %s%s: %r', - cls.VENDOR_NAME, cls.METADATA_HOST, 
cls.METADATA_PATH, e) + _logger.debug( + "Unable to fetch %s data from %s%s: %r", cls.VENDOR_NAME, cls.METADATA_HOST, cls.METADATA_PATH, e + ) return None @classmethod @@ -86,11 +83,9 @@ def get_values(cls, response): return try: - return json_decode(response.decode('utf-8')) + return json_decode(response.decode("utf-8")) except ValueError: - _logger.debug('Invalid %s data (%s%s): %r', - cls.VENDOR_NAME, cls.METADATA_HOST, - cls.METADATA_PATH, response) + _logger.debug("Invalid %s data (%s%s): %r", cls.VENDOR_NAME, cls.METADATA_HOST, cls.METADATA_PATH, response) @classmethod def valid_chars(cls, data): @@ -108,7 +103,7 @@ def valid_length(cls, data): if data is None: return False - b = data.encode('utf-8') + b = data.encode("utf-8") valid = len(b) <= 255 if valid: return True @@ -123,8 +118,7 @@ def normalize(cls, key, data): try: stripped = data.strip() - if (stripped and cls.valid_length(stripped) and - cls.valid_chars(stripped)): + if stripped and cls.valid_length(stripped) and cls.valid_chars(stripped): return stripped except: pass @@ -158,77 +152,75 @@ def detect(cls): class AWSUtilization(CommonUtilization): - EXPECTED_KEYS = ('availabilityZone', 'instanceId', 'instanceType') - METADATA_HOST = '169.254.169.254' - METADATA_PATH = '/latest/dynamic/instance-identity/document' - METADATA_TOKEN_PATH = '/latest/api/token' - HEADERS = {'X-aws-ec2-metadata-token-ttl-seconds': '21600'} - VENDOR_NAME = 'aws' + EXPECTED_KEYS = ("availabilityZone", "instanceId", "instanceType") + METADATA_HOST = "169.254.169.254" + METADATA_PATH = "/latest/dynamic/instance-identity/document" + METADATA_TOKEN_PATH = "/latest/api/token" + HEADERS = {"X-aws-ec2-metadata-token-ttl-seconds": "21600"} + VENDOR_NAME = "aws" @classmethod def fetchAuthToken(cls): try: - with cls.CLIENT_CLS(cls.METADATA_HOST, - timeout=cls.FETCH_TIMEOUT) as client: - resp = client.send_request(method='PUT', - path=cls.METADATA_TOKEN_PATH, - params=cls.METADATA_QUERY, - headers=cls.HEADERS) + with 
cls.CLIENT_CLS(cls.METADATA_HOST, timeout=cls.FETCH_TIMEOUT) as client: + resp = client.send_request( + method="PUT", path=cls.METADATA_TOKEN_PATH, params=cls.METADATA_QUERY, headers=cls.HEADERS + ) if not 200 <= resp[0] < 300: raise ValueError(resp[0]) return resp[1] except Exception as e: - _logger.debug('Unable to fetch %s data from %s%s: %r', - cls.VENDOR_NAME, cls.METADATA_HOST, cls.METADATA_PATH, e) + _logger.debug( + "Unable to fetch %s data from %s%s: %r", cls.VENDOR_NAME, cls.METADATA_HOST, cls.METADATA_PATH, e + ) return None @classmethod def fetch(cls): try: authToken = cls.fetchAuthToken() - if authToken == None: + if authToken is None: return cls.HEADERS = {"X-aws-ec2-metadata-token": authToken} - with cls.CLIENT_CLS(cls.METADATA_HOST, - timeout=cls.FETCH_TIMEOUT) as client: - resp = client.send_request(method='GET', - path=cls.METADATA_PATH, - params=cls.METADATA_QUERY, - headers=cls.HEADERS) + with cls.CLIENT_CLS(cls.METADATA_HOST, timeout=cls.FETCH_TIMEOUT) as client: + resp = client.send_request( + method="GET", path=cls.METADATA_PATH, params=cls.METADATA_QUERY, headers=cls.HEADERS + ) if not 200 <= resp[0] < 300: raise ValueError(resp[0]) return resp[1] except Exception as e: - _logger.debug('Unable to fetch %s data from %s%s: %r', - cls.VENDOR_NAME, cls.METADATA_HOST, cls.METADATA_PATH, e) + _logger.debug( + "Unable to fetch %s data from %s%s: %r", cls.VENDOR_NAME, cls.METADATA_HOST, cls.METADATA_PATH, e + ) return None class AzureUtilization(CommonUtilization): - METADATA_HOST = '169.254.169.254' - METADATA_PATH = '/metadata/instance/compute' - METADATA_QUERY = {'api-version': '2017-03-01'} - EXPECTED_KEYS = ('location', 'name', 'vmId', 'vmSize') - HEADERS = {'Metadata': 'true'} - VENDOR_NAME = 'azure' + METADATA_HOST = "169.254.169.254" + METADATA_PATH = "/metadata/instance/compute" + METADATA_QUERY = {"api-version": "2017-03-01"} + EXPECTED_KEYS = ("location", "name", "vmId", "vmSize") + HEADERS = {"Metadata": "true"} + VENDOR_NAME = "azure" 
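The `valid_length`, `valid_chars`, and `normalize` checks on `CommonUtilization` shown above reduce to a small set of rules: strip the value, cap it at 255 bytes of UTF-8, and restrict it to the `VALID_CHARS_RE` character set. A standalone sketch of those rules (the function name is illustrative, not part of the agent API):

```python
import re

# Same allowed-character set as the utilization module above.
VALID_CHARS_RE = re.compile(r"[0-9a-zA-Z_ ./-]")


def normalize_metadata_value(data, max_bytes=255):
    """Return the stripped value if it passes the length and character
    checks, otherwise None (mirroring CommonUtilization.normalize)."""
    if data is None:
        return None
    stripped = data.strip()
    if not stripped:
        return None
    # Length limit is measured in encoded bytes, not characters.
    if len(stripped.encode("utf-8")) > max_bytes:
        return None
    # Every character must match the allowed set.
    if any(not VALID_CHARS_RE.match(c) for c in stripped):
        return None
    return stripped
```

Each vendor class then layers its own adjustments (e.g. GCP trimming `machineType` and `zone` to the last path segment) on top of this common validation.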
class GCPUtilization(CommonUtilization): - EXPECTED_KEYS = ('id', 'machineType', 'name', 'zone') - HEADERS = {'Metadata-Flavor': 'Google'} - METADATA_HOST = 'metadata.google.internal' - METADATA_PATH = '/computeMetadata/v1/instance/' - METADATA_QUERY = {'recursive': 'true'} - VENDOR_NAME = 'gcp' + EXPECTED_KEYS = ("id", "machineType", "name", "zone") + HEADERS = {"Metadata-Flavor": "Google"} + METADATA_HOST = "metadata.google.internal" + METADATA_PATH = "/computeMetadata/v1/instance/" + METADATA_QUERY = {"recursive": "true"} + VENDOR_NAME = "gcp" @classmethod def normalize(cls, key, data): if data is None: return - if key in ('machineType', 'zone'): - formatted = data.strip().split('/')[-1] - elif key == 'id': + if key in ("machineType", "zone"): + formatted = data.strip().split("/")[-1] + elif key == "id": formatted = str(data) else: formatted = data @@ -237,14 +229,14 @@ def normalize(cls, key, data): class PCFUtilization(CommonUtilization): - EXPECTED_KEYS = ('cf_instance_guid', 'cf_instance_ip', 'memory_limit') - VENDOR_NAME = 'pcf' + EXPECTED_KEYS = ("cf_instance_guid", "cf_instance_ip", "memory_limit") + VENDOR_NAME = "pcf" @staticmethod def fetch(): - cf_instance_guid = os.environ.get('CF_INSTANCE_GUID') - cf_instance_ip = os.environ.get('CF_INSTANCE_IP') - memory_limit = os.environ.get('MEMORY_LIMIT') + cf_instance_guid = os.environ.get("CF_INSTANCE_GUID") + cf_instance_ip = os.environ.get("CF_INSTANCE_IP") + memory_limit = os.environ.get("MEMORY_LIMIT") pcf_vars = (cf_instance_guid, cf_instance_ip, memory_limit) if all(pcf_vars): return pcf_vars @@ -256,30 +248,51 @@ def get_values(cls, response): values = {} for k, v in zip(cls.EXPECTED_KEYS, response): - if hasattr(v, 'decode'): - v = v.decode('utf-8') + if hasattr(v, "decode"): + v = v.decode("utf-8") values[k] = v return values class DockerUtilization(CommonUtilization): - VENDOR_NAME = 'docker' - EXPECTED_KEYS = ('id',) - METADATA_FILE = '/proc/self/cgroup' - DOCKER_RE = re.compile(r'([0-9a-f]{64,})') 
+ VENDOR_NAME = "docker" + EXPECTED_KEYS = ("id",) + + METADATA_FILE_CGROUPS_V1 = "/proc/self/cgroup" + METADATA_RE_CGROUPS_V1 = re.compile(r"[0-9a-f]{64,}") + + METADATA_FILE_CGROUPS_V2 = "/proc/self/mountinfo" + METADATA_RE_CGROUPS_V2 = re.compile(r"^.*/docker/containers/([0-9a-f]{64,})/.*$") @classmethod def fetch(cls): + # Try to read from cgroups try: - with open(cls.METADATA_FILE, 'rb') as f: + with open(cls.METADATA_FILE_CGROUPS_V1, "rb") as f: for line in f: - stripped = line.decode('utf-8').strip() - cgroup = stripped.split(':') + stripped = line.decode("utf-8").strip() + cgroup = stripped.split(":") if len(cgroup) != 3: continue - subsystems = cgroup[1].split(',') - if 'cpu' in subsystems: - return cgroup[2] + subsystems = cgroup[1].split(",") + if "cpu" in subsystems: + contents = cgroup[2].split("/")[-1] + match = cls.METADATA_RE_CGROUPS_V1.search(contents) + if match: + return match.group(0) + except: + # There are all sorts of exceptions that can occur here + # (i.e. permissions, non-existent file, etc) + pass + + # Fallback to reading from mountinfo + try: + with open(cls.METADATA_FILE_CGROUPS_V2, "rb") as f: + for line in f: + stripped = line.decode("utf-8").strip() + match = cls.METADATA_RE_CGROUPS_V2.match(stripped) + if match: + return match.group(1) except: # There are all sorts of exceptions that can occur here # (i.e. 
permissions, non-existent file, etc) @@ -290,11 +303,7 @@ def get_values(cls, contents): if contents is None: return - value = contents.split('/')[-1] - match = cls.DOCKER_RE.search(value) - if match: - value = match.group(0) - return {'id': value} + return {"id": contents} @classmethod def valid_chars(cls, data): @@ -315,20 +324,16 @@ def valid_length(cls, data): return False # Must be exactly 64 characters - valid = len(data) == 64 - if valid: - return True - - return False + return bool(len(data) == 64) class KubernetesUtilization(CommonUtilization): - EXPECTED_KEYS = ('kubernetes_service_host', ) - VENDOR_NAME = 'kubernetes' + EXPECTED_KEYS = ("kubernetes_service_host",) + VENDOR_NAME = "kubernetes" @staticmethod def fetch(): - kubernetes_service_host = os.environ.get('KUBERNETES_SERVICE_HOST') + kubernetes_service_host = os.environ.get("KUBERNETES_SERVICE_HOST") if kubernetes_service_host: return kubernetes_service_host @@ -337,7 +342,7 @@ def get_values(cls, v): if v is None: return - if hasattr(v, 'decode'): - v = v.decode('utf-8') + if hasattr(v, "decode"): + v = v.decode("utf-8") - return {'kubernetes_service_host': v} + return {"kubernetes_service_host": v} diff --git a/newrelic/config.py b/newrelic/config.py index 1d132b4b3b..78ca1ac329 100644 --- a/newrelic/config.py +++ b/newrelic/config.py @@ -34,7 +34,6 @@ import newrelic.api.generator_trace import newrelic.api.import_hook import newrelic.api.memcache_trace -import newrelic.api.object_wrapper import newrelic.api.profile_trace import newrelic.api.settings import newrelic.api.transaction_name @@ -43,7 +42,7 @@ import newrelic.core.agent import newrelic.core.config from newrelic.common.log_file import initialize_logging -from newrelic.common.object_names import expand_builtin_exception_name +from newrelic.common.object_names import callable_name, expand_builtin_exception_name from newrelic.core import trace_cache from newrelic.core.config import ( Settings, @@ -553,11 +552,15 @@ def 
_process_configuration(section): _process_setting(section, "application_logging.enabled", "getboolean", None) _process_setting(section, "application_logging.forwarding.max_samples_stored", "getint", None) _process_setting(section, "application_logging.forwarding.enabled", "getboolean", None) + _process_setting(section, "application_logging.forwarding.context_data.enabled", "getboolean", None) + _process_setting(section, "application_logging.forwarding.context_data.include", "get", _map_inc_excl_attributes) + _process_setting(section, "application_logging.forwarding.context_data.exclude", "get", _map_inc_excl_attributes) _process_setting(section, "application_logging.metrics.enabled", "getboolean", None) _process_setting(section, "application_logging.local_decorating.enabled", "getboolean", None) _process_setting(section, "machine_learning.enabled", "getboolean", None) _process_setting(section, "machine_learning.inference_events_value.enabled", "getboolean", None) + _process_setting(section, "package_reporting.enabled", "getboolean", None) # Loading of configuration from specified file and for specified @@ -1345,7 +1348,7 @@ def _process_background_task_configuration(): group = _config_object.get(section, "group") if name and name.startswith("lambda "): - callable_vars = {"callable_name": newrelic.api.object_wrapper.callable_name} + callable_vars = {"callable_name": callable_name} name = eval(name, callable_vars) # nosec, pylint: disable=W0123 _logger.debug("register background-task %s", ((module, object_path, application, name, group),)) @@ -1395,7 +1398,7 @@ def _process_database_trace_configuration(): sql = _config_object.get(section, "sql") if sql.startswith("lambda "): - callable_vars = {"callable_name": newrelic.api.object_wrapper.callable_name} + callable_vars = {"callable_name": callable_name} sql = eval(sql, callable_vars) # nosec, pylint: disable=W0123 _logger.debug("register database-trace %s", ((module, object_path, sql),)) @@ -1450,11 +1453,11 @@ def 
_process_external_trace_configuration(): method = _config_object.get(section, "method") if url.startswith("lambda "): - callable_vars = {"callable_name": newrelic.api.object_wrapper.callable_name} + callable_vars = {"callable_name": callable_name} url = eval(url, callable_vars) # nosec, pylint: disable=W0123 if method and method.startswith("lambda "): - callable_vars = {"callable_name": newrelic.api.object_wrapper.callable_name} + callable_vars = {"callable_name": callable_name} method = eval(method, callable_vars) # nosec, pylint: disable=W0123 _logger.debug("register external-trace %s", ((module, object_path, library, url, method),)) @@ -1522,7 +1525,7 @@ def _process_function_trace_configuration(): rollup = _config_object.get(section, "rollup") if name and name.startswith("lambda "): - callable_vars = {"callable_name": newrelic.api.object_wrapper.callable_name} + callable_vars = {"callable_name": callable_name} name = eval(name, callable_vars) # nosec, pylint: disable=W0123 _logger.debug( @@ -1580,7 +1583,7 @@ def _process_generator_trace_configuration(): group = _config_object.get(section, "group") if name and name.startswith("lambda "): - callable_vars = {"callable_name": newrelic.api.object_wrapper.callable_name} + callable_vars = {"callable_name": callable_name} name = eval(name, callable_vars) # nosec, pylint: disable=W0123 _logger.debug("register generator-trace %s", ((module, object_path, name, group),)) @@ -1639,7 +1642,7 @@ def _process_profile_trace_configuration(): depth = _config_object.get(section, "depth") if name and name.startswith("lambda "): - callable_vars = {"callable_name": newrelic.api.object_wrapper.callable_name} + callable_vars = {"callable_name": callable_name} name = eval(name, callable_vars) # nosec, pylint: disable=W0123 _logger.debug("register profile-trace %s", ((module, object_path, name, group, depth),)) @@ -1689,7 +1692,7 @@ def _process_memcache_trace_configuration(): command = _config_object.get(section, "command") if 
command.startswith("lambda "): - callable_vars = {"callable_name": newrelic.api.object_wrapper.callable_name} + callable_vars = {"callable_name": callable_name} command = eval(command, callable_vars) # nosec, pylint: disable=W0123 _logger.debug("register memcache-trace %s", (module, object_path, command)) @@ -1749,7 +1752,7 @@ def _process_transaction_name_configuration(): priority = _config_object.getint(section, "priority") if name and name.startswith("lambda "): - callable_vars = {"callable_name": newrelic.api.object_wrapper.callable_name} + callable_vars = {"callable_name": callable_name} name = eval(name, callable_vars) # nosec, pylint: disable=W0123 _logger.debug("register transaction-name %s", ((module, object_path, name, group, priority),)) @@ -2228,6 +2231,12 @@ def _process_module_builtin_defaults(): "instrument_langchain_vectorstore_similarity_search", ) + _process_module_definition( + "langchain_community.vectorstores.lantern", + "newrelic.hooks.mlmodel_langchain", + "instrument_langchain_vectorstore_similarity_search", + ) + _process_module_definition( "langchain_community.vectorstores.llm_rails", "newrelic.hooks.mlmodel_langchain", @@ -2896,7 +2905,11 @@ def _process_module_builtin_defaults(): "newrelic.hooks.logger_structlog", "instrument_structlog__base", ) - + _process_module_definition( + "structlog._frames", + "newrelic.hooks.logger_structlog", + "instrument_structlog__frames", + ) _process_module_definition( "paste.httpserver", "newrelic.hooks.adapter_paste", @@ -4371,13 +4384,21 @@ def _process_module_builtin_defaults(): def _process_module_entry_points(): try: - import pkg_resources + # Preferred after Python 3.10 + if sys.version_info >= (3, 10): + from importlib.metadata import entry_points + # Introduced in Python 3.8 + elif sys.version_info >= (3, 8) and sys.version_info <= (3, 9): + from importlib_metadata import entry_points + # Removed in Python 3.12 + else: + from pkg_resources import iter_entry_points as entry_points except 
ImportError: return group = "newrelic.hooks" - for entrypoint in pkg_resources.iter_entry_points(group=group): + for entrypoint in entry_points(group=group): target = entrypoint.name if target in _module_import_hook_registry: @@ -4435,13 +4456,21 @@ def _setup_instrumentation(): def _setup_extensions(): try: - import pkg_resources + # Preferred after Python 3.10 + if sys.version_info >= (3, 10): + from importlib.metadata import entry_points + # Introduced in Python 3.8 + elif sys.version_info >= (3, 8) and sys.version_info <= (3, 9): + from importlib_metadata import entry_points + # Removed in Python 3.12 + else: + from pkg_resources import iter_entry_points as entry_points except ImportError: return group = "newrelic.extension" - for entrypoint in pkg_resources.iter_entry_points(group=group): + for entrypoint in entry_points(group=group): __import__(entrypoint.module_name) module = sys.modules[entrypoint.module_name] module.initialize() diff --git a/newrelic/console.py b/newrelic/console.py index 48cda6e7cc..9393721eec 100644 --- a/newrelic/console.py +++ b/newrelic/console.py @@ -63,7 +63,6 @@ def doc_signature(func): sig._parameters = OrderedDict(list(sig._parameters.items())[1:]) return str(sig) - except ImportError: from inspect import formatargspec @@ -72,11 +71,10 @@ def doc_signature(func): return formatargspec(args[1:], varargs, keywords, defaults) -from newrelic.api.object_wrapper import ObjectWrapper -from newrelic.api.transaction import Transaction -from newrelic.core.agent import agent_instance -from newrelic.core.config import flatten_settings, global_settings -from newrelic.core.trace_cache import trace_cache +from newrelic.common.object_wrapper import ObjectProxy # noqa: E402 +from newrelic.core.agent import agent_instance # noqa: E402 +from newrelic.core.config import flatten_settings, global_settings # noqa: E402 +from newrelic.core.trace_cache import trace_cache # noqa: E402 _trace_cache = trace_cache() @@ -161,7 +159,7 @@ def __call__(self, 
code=None): __builtin__.exit = Quitter("exit") -class OutputWrapper(ObjectWrapper): +class OutputWrapper(ObjectProxy): def flush(self): try: shell = _consoles.active @@ -187,8 +185,8 @@ def writelines(self, data): def intercept_console(): setquit() - sys.stdout = OutputWrapper(sys.stdout, None, None) - sys.stderr = OutputWrapper(sys.stderr, None, None) + sys.stdout = OutputWrapper(sys.stdout) + sys.stderr = OutputWrapper(sys.stderr) class EmbeddedConsole(code.InteractiveConsole): @@ -205,7 +203,6 @@ def raw_input(self, prompt): class ConsoleShell(cmd.Cmd): - use_rawinput = 0 def __init__(self): @@ -534,7 +531,6 @@ def __thread_run(self): class ClientShell(cmd.Cmd): - prompt = "(newrelic) " def __init__(self, config_file, stdin=None, stdout=None, log=None): diff --git a/newrelic/core/agent.py b/newrelic/core/agent.py index 9d9aadab16..31cef43e89 100644 --- a/newrelic/core/agent.py +++ b/newrelic/core/agent.py @@ -339,7 +339,6 @@ def activate_application(self, app_name, linked_applications=None, timeout=None, with self._lock: application = self._applications.get(app_name, None) if not application: - process_id = os.getpid() if process_id != self._process_id: @@ -449,7 +448,6 @@ def register_data_source(self, source, application=None, name=None, settings=Non instance.register_data_source(source, name, settings, **properties) def remove_thread_utilization(self): - _logger.debug("Removing thread utilization data source from all applications") source_name = thread_utilization_data_source.__name__ @@ -565,12 +563,12 @@ def record_ml_event(self, app_name, event_type, params): application.record_ml_event(event_type, params) - def record_log_event(self, app_name, message, level=None, timestamp=None, priority=None): + def record_log_event(self, app_name, message, level=None, timestamp=None, attributes=None, priority=None): application = self._applications.get(app_name, None) if application is None or not application.active: return - application.record_log_event(message, 
level, timestamp, priority=priority) + application.record_log_event(message, level, timestamp, attributes=attributes, priority=priority) def record_transaction(self, app_name, data): """Processes the raw transaction data, generating and recording diff --git a/newrelic/core/application.py b/newrelic/core/application.py index e1ada60aac..732a0dce0d 100644 --- a/newrelic/core/application.py +++ b/newrelic/core/application.py @@ -939,15 +939,16 @@ def record_ml_event(self, event_type, params): self._global_events_account += 1 self._stats_engine.record_ml_event(event) - def record_log_event(self, message, level=None, timestamp=None, priority=None): + def record_log_event(self, message, level=None, timestamp=None, attributes=None, priority=None): if not self._active_session: return - if message: - with self._stats_custom_lock: - event = self._stats_engine.record_log_event(message, level, timestamp, priority=priority) - if event: - self._global_events_account += 1 + with self._stats_custom_lock: + event = self._stats_engine.record_log_event( + message, level, timestamp, attributes=attributes, priority=priority + ) + if event: + self._global_events_account += 1 def record_transaction(self, data): """Record a single transaction against this application.""" diff --git a/newrelic/core/attribute.py b/newrelic/core/attribute.py index a872b4b1b0..880597a052 100644 --- a/newrelic/core/attribute.py +++ b/newrelic/core/attribute.py @@ -18,6 +18,7 @@ from newrelic.core.attribute_filter import ( DST_ALL, DST_ERROR_COLLECTOR, + DST_LOG_EVENT_CONTEXT_DATA, DST_SPAN_EVENTS, DST_TRANSACTION_EVENTS, DST_TRANSACTION_SEGMENTS, @@ -176,6 +177,32 @@ def resolve_agent_attributes(attr_dict, attribute_filter, target_destination, at return a_attrs +def resolve_logging_context_attributes(attr_dict, attribute_filter, attr_prefix, attr_class=dict): + """ + Helper function for processing logging context attributes that require a prefix. 
Correctly filters attribute names + before applying the required prefix, and then applies process_user_attribute after the prefix is applied to + correctly check length requirements. + """ + c_attrs = attr_class() + + for attr_name, attr_value in attr_dict.items(): + dest = attribute_filter.apply(attr_name, DST_LOG_EVENT_CONTEXT_DATA) + + if dest & DST_LOG_EVENT_CONTEXT_DATA: + try: + attr_name, attr_value = process_user_attribute(attr_prefix + attr_name, attr_value) + if attr_name: + c_attrs[attr_name] = attr_value + except Exception: + _logger.debug( + "Log event context attribute failed to validate for unknown reason. Dropping context attribute: %s. Check traceback for clues.", + attr_name, + exc_info=True, + ) + + return c_attrs + + def create_user_attributes(attr_dict, attribute_filter): destinations = DST_ALL return create_attributes(attr_dict, destinations, attribute_filter) diff --git a/newrelic/core/attribute_filter.py b/newrelic/core/attribute_filter.py index 8d4a93843b..8cd26cb30f 100644 --- a/newrelic/core/attribute_filter.py +++ b/newrelic/core/attribute_filter.py @@ -15,16 +15,17 @@ # Attribute "destinations" represented as bitfields. DST_NONE = 0x0 -DST_ALL = 0x3F -DST_TRANSACTION_EVENTS = 1 << 0 -DST_TRANSACTION_TRACER = 1 << 1 -DST_ERROR_COLLECTOR = 1 << 2 -DST_BROWSER_MONITORING = 1 << 3 -DST_SPAN_EVENTS = 1 << 4 +DST_ALL = 0x7F +DST_TRANSACTION_EVENTS = 1 << 0 +DST_TRANSACTION_TRACER = 1 << 1 +DST_ERROR_COLLECTOR = 1 << 2 +DST_BROWSER_MONITORING = 1 << 3 +DST_SPAN_EVENTS = 1 << 4 DST_TRANSACTION_SEGMENTS = 1 << 5 +DST_LOG_EVENT_CONTEXT_DATA = 1 << 6 -class AttributeFilter(object): +class AttributeFilter(object): # Apply filtering rules to attributes. # # Upon initialization, an AttributeFilter object will take all attribute @@ -59,46 +60,45 @@ class AttributeFilter(object): # 4. Return the resulting bitfield after all rules have been applied.
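The widened destination bitfield in the `attribute_filter.py` hunk above (a seventh bit for log-event context data, with `DST_ALL` growing from `0x3F` to `0x7F`) can be illustrated with a short standalone sketch. The constant names and values are copied from the diff; the combining/testing logic is only an illustration of how the agent uses them, not the actual implementation:

```python
# Destination bitfield constants as declared in attribute_filter.py above.
DST_NONE = 0x0
DST_TRANSACTION_EVENTS = 1 << 0
DST_TRANSACTION_SEGMENTS = 1 << 5
DST_LOG_EVENT_CONTEXT_DATA = 1 << 6  # new destination added by this change
DST_ALL = 0x7F  # widened from 0x3F so the new seventh bit is included

# Enabling a destination is a bitwise OR; checking one is a bitwise AND.
enabled_destinations = DST_NONE
enabled_destinations |= DST_TRANSACTION_EVENTS
enabled_destinations |= DST_LOG_EVENT_CONTEXT_DATA

assert enabled_destinations & DST_LOG_EVENT_CONTEXT_DATA  # context data enabled
assert not enabled_destinations & DST_TRANSACTION_SEGMENTS  # segments not enabled
assert DST_ALL == (1 << 7) - 1  # all seven destination bits set
```

This is why `DST_ALL` had to change in the same commit: with the old `0x3F` mask, rules applied to `DST_ALL` would never have reached the new log-event-context destination.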
def __init__(self, flattened_settings): - self.enabled_destinations = self._set_enabled_destinations(flattened_settings) self.rules = self._build_rules(flattened_settings) self.cache = {} def __repr__(self): - return "<AttributeFilter destinations=%s rules=%s>" % ( - bin(self.enabled_destinations), self.rules) + return "<AttributeFilter destinations=%s rules=%s>" % (bin(self.enabled_destinations), self.rules) def _set_enabled_destinations(self, settings): - # Determines and returns bitfield representing attribute destinations enabled. enabled_destinations = DST_NONE - if settings.get('transaction_segments.attributes.enabled', None): + if settings.get("transaction_segments.attributes.enabled", None): enabled_destinations |= DST_TRANSACTION_SEGMENTS - if settings.get('span_events.attributes.enabled', None): + if settings.get("span_events.attributes.enabled", None): enabled_destinations |= DST_SPAN_EVENTS - if settings.get('transaction_tracer.attributes.enabled', None): + if settings.get("transaction_tracer.attributes.enabled", None): enabled_destinations |= DST_TRANSACTION_TRACER - if settings.get('transaction_events.attributes.enabled', None): + if settings.get("transaction_events.attributes.enabled", None): enabled_destinations |= DST_TRANSACTION_EVENTS - if settings.get('error_collector.attributes.enabled', None): + if settings.get("error_collector.attributes.enabled", None): enabled_destinations |= DST_ERROR_COLLECTOR - if settings.get('browser_monitoring.attributes.enabled', None): + if settings.get("browser_monitoring.attributes.enabled", None): enabled_destinations |= DST_BROWSER_MONITORING - if not settings.get('attributes.enabled', None): + if settings.get("application_logging.forwarding.context_data.enabled", None): + enabled_destinations |= DST_LOG_EVENT_CONTEXT_DATA + + if not settings.get("attributes.enabled", None): enabled_destinations = DST_NONE return enabled_destinations def _build_rules(self, settings): - # "Rule Templates" below are used for building AttributeFilterRules.
# # Each tuple includes: @@ -107,26 +107,27 @@ def _build_rules(self, settings): # 3. Boolean that represents whether the setting is an "include" or not. rule_templates = ( - ('attributes.include', DST_ALL, True), - ('attributes.exclude', DST_ALL, False), - ('transaction_events.attributes.include', DST_TRANSACTION_EVENTS, True), - ('transaction_events.attributes.exclude', DST_TRANSACTION_EVENTS, False), - ('transaction_tracer.attributes.include', DST_TRANSACTION_TRACER, True), - ('transaction_tracer.attributes.exclude', DST_TRANSACTION_TRACER, False), - ('error_collector.attributes.include', DST_ERROR_COLLECTOR, True), - ('error_collector.attributes.exclude', DST_ERROR_COLLECTOR, False), - ('browser_monitoring.attributes.include', DST_BROWSER_MONITORING, True), - ('browser_monitoring.attributes.exclude', DST_BROWSER_MONITORING, False), - ('span_events.attributes.include', DST_SPAN_EVENTS, True), - ('span_events.attributes.exclude', DST_SPAN_EVENTS, False), - ('transaction_segments.attributes.include', DST_TRANSACTION_SEGMENTS, True), - ('transaction_segments.attributes.exclude', DST_TRANSACTION_SEGMENTS, False), + ("attributes.include", DST_ALL, True), + ("attributes.exclude", DST_ALL, False), + ("transaction_events.attributes.include", DST_TRANSACTION_EVENTS, True), + ("transaction_events.attributes.exclude", DST_TRANSACTION_EVENTS, False), + ("transaction_tracer.attributes.include", DST_TRANSACTION_TRACER, True), + ("transaction_tracer.attributes.exclude", DST_TRANSACTION_TRACER, False), + ("error_collector.attributes.include", DST_ERROR_COLLECTOR, True), + ("error_collector.attributes.exclude", DST_ERROR_COLLECTOR, False), + ("browser_monitoring.attributes.include", DST_BROWSER_MONITORING, True), + ("browser_monitoring.attributes.exclude", DST_BROWSER_MONITORING, False), + ("span_events.attributes.include", DST_SPAN_EVENTS, True), + ("span_events.attributes.exclude", DST_SPAN_EVENTS, False), + ("transaction_segments.attributes.include", DST_TRANSACTION_SEGMENTS, 
True), + ("transaction_segments.attributes.exclude", DST_TRANSACTION_SEGMENTS, False), + ("application_logging.forwarding.context_data.include", DST_LOG_EVENT_CONTEXT_DATA, True), + ("application_logging.forwarding.context_data.exclude", DST_LOG_EVENT_CONTEXT_DATA, False), ) rules = [] - for (setting_name, destination, is_include) in rule_templates: - + for setting_name, destination, is_include in rule_templates: for setting in settings.get(setting_name) or (): rule = AttributeFilterRule(setting, destination, is_include) rules.append(rule) @@ -157,16 +158,15 @@ def apply(self, name, default_destinations): self.cache[cache_index] = destinations return destinations -class AttributeFilterRule(object): +class AttributeFilterRule(object): def __init__(self, name, destinations, is_include): - self.name = name.rstrip('*') + self.name = name.rstrip("*") self.destinations = destinations self.is_include = is_include - self.is_wildcard = name.endswith('*') + self.is_wildcard = name.endswith("*") def _as_sortable(self): - # Represent AttributeFilterRule as a tuple that will sort properly. 
# # Sorting rules: @@ -207,8 +207,7 @@ def __ge__(self, other): return self._as_sortable() >= other._as_sortable() def __repr__(self): - return '(%s, %s, %s, %s)' % (self.name, bin(self.destinations), - self.is_wildcard, self.is_include) + return "(%s, %s, %s, %s)" % (self.name, bin(self.destinations), self.is_wildcard, self.is_include) def name_match(self, name): if self.is_wildcard: diff --git a/newrelic/core/config.py b/newrelic/core/config.py index 2128483481..b140fc86ea 100644 --- a/newrelic/core/config.py +++ b/newrelic/core/config.py @@ -144,6 +144,10 @@ class MachineLearningInferenceEventsValueSettings(Settings): pass +class PackageReportingSettings(Settings): + pass + + class CodeLevelMetricsSettings(Settings): pass @@ -298,6 +302,10 @@ class ApplicationLoggingForwardingSettings(Settings): pass +class ApplicationLoggingForwardingContextDataSettings(Settings): + pass + + class ApplicationLoggingMetricsSettings(Settings): pass @@ -395,10 +403,13 @@ class EventHarvestConfigHarvestLimitSettings(Settings): _settings.agent_limits = AgentLimitsSettings() _settings.application_logging = ApplicationLoggingSettings() _settings.application_logging.forwarding = ApplicationLoggingForwardingSettings() +_settings.application_logging.forwarding.context_data = ApplicationLoggingForwardingContextDataSettings() +_settings.application_logging.metrics = ApplicationLoggingMetricsSettings() _settings.application_logging.local_decorating = ApplicationLoggingLocalDecoratingSettings() _settings.application_logging.metrics = ApplicationLoggingMetricsSettings() _settings.machine_learning = MachineLearningSettings() _settings.machine_learning.inference_events_value = MachineLearningInferenceEventsValueSettings() +_settings.package_reporting = PackageReportingSettings() _settings.attributes = AttributesSettings() _settings.browser_monitoring = BrowserMonitorSettings() _settings.browser_monitoring.attributes = BrowserMonitorAttributesSettings() @@ -893,6 +904,15 @@ def 
default_otlp_host(host): _settings.application_logging.forwarding.enabled = _environ_as_bool( "NEW_RELIC_APPLICATION_LOGGING_FORWARDING_ENABLED", default=True ) +_settings.application_logging.forwarding.context_data.enabled = _environ_as_bool( + "NEW_RELIC_APPLICATION_LOGGING_FORWARDING_CONTEXT_DATA_ENABLED", default=False +) +_settings.application_logging.forwarding.context_data.include = _environ_as_set( + "NEW_RELIC_APPLICATION_LOGGING_FORWARDING_CONTEXT_DATA_INCLUDE", default="" +) +_settings.application_logging.forwarding.context_data.exclude = _environ_as_set( + "NEW_RELIC_APPLICATION_LOGGING_FORWARDING_CONTEXT_DATA_EXCLUDE", default="" +) _settings.application_logging.metrics.enabled = _environ_as_bool( "NEW_RELIC_APPLICATION_LOGGING_METRICS_ENABLED", default=True ) @@ -903,6 +923,7 @@ def default_otlp_host(host): _settings.machine_learning.inference_events_value.enabled = _environ_as_bool( "NEW_RELIC_MACHINE_LEARNING_INFERENCE_EVENT_VALUE_ENABLED", default=False ) +_settings.package_reporting.enabled = _environ_as_bool("NEW_RELIC_PACKAGE_REPORTING_ENABLED", default=True) _settings.ml_insights_events.enabled = _environ_as_bool("NEW_RELIC_ML_INSIGHTS_EVENTS_ENABLED", default=False) diff --git a/newrelic/core/environment.py b/newrelic/core/environment.py index 9bca085a3a..6d24eced50 100644 --- a/newrelic/core/environment.py +++ b/newrelic/core/environment.py @@ -29,6 +29,7 @@ physical_processor_count, total_physical_memory, ) +from newrelic.core.config import global_settings from newrelic.packages.isort import stdlibs as isort_stdlibs try: @@ -202,44 +203,46 @@ def environment_settings(): plugins = [] - # Using any iterable to create a snapshot of sys.modules can occassionally - # fail in a rare case when modules are imported in parallel by different - # threads. 
- # - # TL;DR: Do NOT use an iterable on the original sys.modules to generate the - list - for name, module in sys.modules.copy().items(): - # Exclude lib.sub_paths as independent modules except for newrelic.hooks. - nr_hook = name.startswith("newrelic.hooks.") - if "." in name and not nr_hook or name.startswith("_"): - continue - - # If the module isn't actually loaded (such as failed relative imports - # in Python 2.7), the module will be None and should not be reported. - try: - if not module: + settings = global_settings() + if settings and settings.package_reporting.enabled: + # Using any iterable to create a snapshot of sys.modules can occasionally + # fail in a rare case when modules are imported in parallel by different + # threads. + # + # TL;DR: Do NOT use an iterable on the original sys.modules to generate the + # list + for name, module in sys.modules.copy().items(): + # Exclude lib.sub_paths as independent modules except for newrelic.hooks. + nr_hook = name.startswith("newrelic.hooks.") + if "." in name and not nr_hook or name.startswith("_"): continue - except Exception: - # if the application uses generalimport to manage optional depedencies, - # it's possible that generalimport.MissingOptionalDependency is raised. - # In this case, we should not report the module as it is not actually loaded and - # is not a runtime dependency of the application. - # - continue - - # Exclude standard library/built-in modules. - if name in stdlib_builtin_module_names: - continue - - try: - version = get_package_version(name) - except Exception: - version = None - - # If it has no version it's likely not a real package so don't report it unless - # it's a new relic hook. - if version or nr_hook: - plugins.append("%s (%s)" % (name, version)) + + # If the module isn't actually loaded (such as failed relative imports + # in Python 2.7), the module will be None and should not be reported.
+ try: + if not module: + continue + except Exception: + # if the application uses generalimport to manage optional dependencies, + # it's possible that generalimport.MissingOptionalDependency is raised. + # In this case, we should not report the module as it is not actually loaded and + # is not a runtime dependency of the application. + # + continue + + # Exclude standard library/built-in modules. + if name in stdlib_builtin_module_names: + continue + + try: + version = get_package_version(name) + except Exception: + version = None + + # If it has no version it's likely not a real package so don't report it unless + # it's a new relic hook. + if version or nr_hook: + plugins.append("%s (%s)" % (name, version)) env.append(("Plugin List", plugins)) diff --git a/newrelic/core/internal_metrics.py b/newrelic/core/internal_metrics.py index 87452fce4a..2413cdb1f7 100644 --- a/newrelic/core/internal_metrics.py +++ b/newrelic/core/internal_metrics.py @@ -12,16 +12,15 @@ # See the License for the specific language governing permissions and # limitations under the License.
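The gated `sys.modules` scan in the `environment.py` hunk above can be approximated with a standalone sketch. `collect_plugins` is a hypothetical stand-in for the agent's `environment_settings` loop, and the `__version__` lookup is a dummy substitute for `get_package_version` (the real implementation also filters standard-library modules):

```python
import sys


def collect_plugins(package_reporting_enabled=True):
    """Simplified sketch of the package_reporting-gated module scan."""
    plugins = []
    if package_reporting_enabled:
        # Iterate over a snapshot -- sys.modules can mutate mid-iteration
        # when other threads import modules in parallel.
        for name, module in sys.modules.copy().items():
            # Exclude sub-paths as independent modules, except newrelic hooks.
            nr_hook = name.startswith("newrelic.hooks.")
            if "." in name and not nr_hook or name.startswith("_"):
                continue
            # Modules that failed to import may be present but None.
            if not module:
                continue
            # Dummy stand-in for the agent's get_package_version helper.
            version = getattr(module, "__version__", None)
            if version or nr_hook:
                plugins.append("%s (%s)" % (name, version))
    return plugins
```

With `NEW_RELIC_PACKAGE_REPORTING_ENABLED` mapped to `package_reporting_enabled=False`, the scan is skipped entirely and an empty plugin list is reported, which is the point of the new setting.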
-import functools -import sys -import types -import time import threading +import time + +import newrelic.common.object_wrapper _context = threading.local() -class InternalTrace(object): +class InternalTrace(object): def __init__(self, name, metrics=None): self.name = name self.metrics = metrics @@ -29,7 +28,7 @@ def __init__(self, name, metrics=None): def __enter__(self): if self.metrics is None: - self.metrics = getattr(_context, 'current', None) + self.metrics = getattr(_context, "current", None) self.start = time.time() return self @@ -38,8 +37,8 @@ def __exit__(self, exc, value, tb): if self.metrics is not None: self.metrics.record_custom_metric(self.name, duration) -class InternalTraceWrapper(object): +class InternalTraceWrapper(object): def __init__(self, wrapped, name): if type(wrapped) == type(()): (instance, wrapped) = wrapped @@ -59,7 +58,7 @@ def __get__(self, instance, klass): return self.__class__((instance, descriptor), self.__name) def __call__(self, *args, **kwargs): - metrics = getattr(_context, 'current', None) + metrics = getattr(_context, "current", None) if metrics is None: return self.__wrapped(*args, **kwargs) @@ -67,14 +66,14 @@ def __call__(self, *args, **kwargs): with InternalTrace(self.__name, metrics): return self.__wrapped(*args, **kwargs) -class InternalTraceContext(object): +class InternalTraceContext(object): def __init__(self, metrics): self.previous = None self.metrics = metrics def __enter__(self): - self.previous = getattr(_context, 'current', None) + self.previous = getattr(_context, "current", None) _context.current = self.metrics return self @@ -82,25 +81,29 @@ def __exit__(self, exc, value, tb): if self.previous is not None: _context.current = self.previous + def internal_trace(name=None): def decorator(wrapped): return InternalTraceWrapper(wrapped, name) + return decorator + def wrap_internal_trace(module, object_path, name=None): - newrelic.api.object_wrapper.wrap_object(module, object_path, - InternalTraceWrapper, 
(name,)) + newrelic.common.object_wrapper.wrap_object(module, object_path, InternalTraceWrapper, (name,)) + def internal_metric(name, value): - metrics = getattr(_context, 'current', None) + metrics = getattr(_context, "current", None) if metrics is not None: metrics.record_custom_metric(name, value) + def internal_count_metric(name, count): """Create internal metric where only count has a value. All other fields have a value of 0. """ - count_metric = {'count': count} + count_metric = {"count": count} internal_metric(name, count_metric) diff --git a/newrelic/core/stats_engine.py b/newrelic/core/stats_engine.py index e5c39a2df2..fe8eadc711 100644 --- a/newrelic/core/stats_engine.py +++ b/newrelic/core/stats_engine.py @@ -43,6 +43,7 @@ create_agent_attributes, create_user_attributes, process_user_attribute, + resolve_logging_context_attributes, truncate, ) from newrelic.core.attribute_filter import DST_ERROR_COLLECTOR @@ -1223,7 +1224,7 @@ def record_transaction(self, transaction): ): self._log_events.merge(transaction.log_events, priority=transaction.priority) - def record_log_event(self, message, level=None, timestamp=None, priority=None): + def record_log_event(self, message, level=None, timestamp=None, attributes=None, priority=None): settings = self.__settings if not ( settings @@ -1236,18 +1237,62 @@ def record_log_event(self, message, level=None, timestamp=None, priority=None): timestamp = timestamp if timestamp is not None else time.time() level = str(level) if level is not None else "UNKNOWN" + context_attributes = attributes # Name reassigned for clarity - if not message or message.isspace(): - _logger.debug("record_log_event called where message was missing. 
No log event will be sent.") - return + # Unpack message and attributes from dict inputs + if isinstance(message, dict): + message_attributes = {k: v for k, v in message.items() if k != "message"} + message = message.get("message", "") + else: + message_attributes = None + + if message is not None: + # Coerce message into a string type + if not isinstance(message, six.string_types): + try: + message = str(message) + except Exception: + # Exit early for invalid message type after unpacking + _logger.debug( + "record_log_event called where message could not be converted to a string type. No log event will be sent." + ) + return + + # Truncate the now unpacked and string converted message + message = truncate(message, MAX_LOG_MESSAGE_LENGTH) + + # Collect attributes from linking metadata, context data, and message attributes + collected_attributes = {} + if settings and settings.application_logging.forwarding.context_data.enabled: + if context_attributes: + context_attributes = resolve_logging_context_attributes( + context_attributes, settings.attribute_filter, "context." + ) + if context_attributes: + collected_attributes.update(context_attributes) + + if message_attributes: + message_attributes = resolve_logging_context_attributes( + message_attributes, settings.attribute_filter, "message." + ) + if message_attributes: + collected_attributes.update(message_attributes) + + # Exit early if no message or attributes found after filtering + if (not message or message.isspace()) and not context_attributes and not message_attributes: + _logger.debug( + "record_log_event called where no message and no attributes were found. No log event will be sent." 
+ ) + return - message = truncate(message, MAX_LOG_MESSAGE_LENGTH) + # Finally, add in linking attributes after checking that there is a valid message or at least 1 attribute + collected_attributes.update(get_linking_metadata()) event = LogEventNode( timestamp=timestamp, level=level, message=message, - attributes=get_linking_metadata(), + attributes=collected_attributes, ) if priority is None: diff --git a/newrelic/core/transaction_node.py b/newrelic/core/transaction_node.py index d63d7f9b65..74216f7df2 100644 --- a/newrelic/core/transaction_node.py +++ b/newrelic/core/transaction_node.py @@ -22,6 +22,7 @@ import newrelic.core.error_collector import newrelic.core.trace_node +from newrelic.common.encoding_utils import camel_case from newrelic.common.streaming_utils import SpanProtoAttrs from newrelic.core.attribute import create_agent_attributes, create_user_attributes from newrelic.core.attribute_filter import ( @@ -76,6 +77,10 @@ "synthetics_job_id", "synthetics_monitor_id", "synthetics_header", + "synthetics_type", + "synthetics_initiator", + "synthetics_attributes", + "synthetics_info_header", "is_part_of_cat", "trip_id", "path_hash", @@ -586,6 +591,15 @@ def _event_intrinsics(self, stats_table): intrinsics["nr.syntheticsJobId"] = self.synthetics_job_id intrinsics["nr.syntheticsMonitorId"] = self.synthetics_monitor_id + if self.synthetics_type: + intrinsics["nr.syntheticsType"] = self.synthetics_type + intrinsics["nr.syntheticsInitiator"] = self.synthetics_initiator + if self.synthetics_attributes: + # Add all synthetics attributes + for k, v in self.synthetics_attributes.items(): + if k: + intrinsics["nr.synthetics%s" % camel_case(k, upper=True)] = v + def _add_call_time(source, target): # include time for keys previously added to stats table via # stats_engine.record_transaction diff --git a/newrelic/hooks/application_celery.py b/newrelic/hooks/application_celery.py index 12f41d8d0d..ab7ca9e95c 100644 --- a/newrelic/hooks/application_celery.py +++ 
b/newrelic/hooks/application_celery.py @@ -26,7 +26,8 @@ from newrelic.api.background_task import BackgroundTask from newrelic.api.function_trace import FunctionTrace from newrelic.api.pre_function import wrap_pre_function -from newrelic.api.object_wrapper import callable_name, ObjectWrapper +from newrelic.common.object_names import callable_name +from newrelic.common.object_wrapper import FunctionWrapper from newrelic.api.transaction import current_transaction from newrelic.core.agent import shutdown_agent @@ -98,10 +99,6 @@ def _application(): with BackgroundTask(_application(), _name, 'Celery', source=instance): return wrapped(*args, **kwargs) - # Start Hotfix v2.2.1. - # obj = ObjectWrapper(wrapped, None, wrapper) - # End Hotfix v2.2.1. - # Celery tasks that inherit from celery.app.task must implement a run() # method. # ref: (http://docs.celeryproject.org/en/2.5/reference/ @@ -110,11 +107,11 @@ def _application(): # task. But celery does a micro-optimization where if the __call__ method # was not overridden by an inherited task, then it will directly execute # the run() method without going through the __call__ method. Our - # instrumentation via ObjectWrapper() relies on __call__ being called which + # instrumentation via FunctionWrapper() relies on __call__ being called which # in turn executes the wrapper() function defined above. Since the micro # optimization bypasses __call__ method it breaks our instrumentation of # celery. To circumvent this problem, we added a run() attribute to our - # ObjectWrapper which points to our __call__ method. This causes Celery + # FunctionWrapper which points to our __call__ method. This causes Celery # to execute our __call__ method which in turn applies the wrapper # correctly before executing the task. # @@ -122,17 +119,11 @@ def _application(): # versions included a monkey-patching provision which did not perform this # optimization on functions that were monkey-patched. - # Start Hotfix v2.2.1. 
- # obj.__dict__['run'] = obj.__call__ - - class _ObjectWrapper(ObjectWrapper): + class TaskWrapper(FunctionWrapper): def run(self, *args, **kwargs): return self.__call__(*args, **kwargs) - obj = _ObjectWrapper(wrapped, None, wrapper) - # End Hotfix v2.2.1. - - return obj + return TaskWrapper(wrapped, wrapper) def instrument_celery_app_task(module): diff --git a/newrelic/hooks/component_piston.py b/newrelic/hooks/component_piston.py index 78b975ed53..96204f404c 100644 --- a/newrelic/hooks/component_piston.py +++ b/newrelic/hooks/component_piston.py @@ -16,14 +16,15 @@ import newrelic.api.transaction import newrelic.api.function_trace -import newrelic.api.object_wrapper +import newrelic.common.object_wrapper +from newrelic.common.object_names import callable_name import newrelic.api.in_function class MethodWrapper(object): def __init__(self, wrapped, priority=None): - self._nr_name = newrelic.api.object_wrapper.callable_name(wrapped) + self._nr_name = callable_name(wrapped) self._nr_wrapped = wrapped self._nr_priority = priority @@ -76,7 +77,7 @@ def __call__(self, *args, **kwargs): def instrument_piston_resource(module): - newrelic.api.object_wrapper.wrap_object(module, + newrelic.common.object_wrapper.wrap_object(module, 'Resource.__init__', ResourceInitWrapper) diff --git a/newrelic/hooks/component_tastypie.py b/newrelic/hooks/component_tastypie.py index 8cc251916c..da93efbfb3 100644 --- a/newrelic/hooks/component_tastypie.py +++ b/newrelic/hooks/component_tastypie.py @@ -12,13 +12,11 @@ # See the License for the specific language governing permissions and # limitations under the License. 
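The Celery hunk above replaces the old `ObjectWrapper` hotfix with a `FunctionWrapper` subclass that exposes `run()`. The shape of that workaround can be sketched without the agent itself; `FunctionWrapperSketch` below is a toy stand-in for `newrelic.common.object_wrapper.FunctionWrapper` (which is wrapt-based), and `tracing_wrapper` stands in for the instrumentation wrapper:

```python
class FunctionWrapperSketch:
    """Toy stand-in for FunctionWrapper: calls wrapper(wrapped, instance, args, kwargs)."""

    def __init__(self, wrapped, wrapper):
        self.__wrapped__ = wrapped
        self._wrapper = wrapper

    def __call__(self, *args, **kwargs):
        return self._wrapper(self.__wrapped__, None, args, kwargs)


class TaskWrapper(FunctionWrapperSketch):
    # Celery's micro-optimization may invoke task.run() directly, bypassing
    # __call__, so the wrapper aliases run() to __call__ to keep the
    # instrumentation in the execution path.
    def run(self, *args, **kwargs):
        return self.__call__(*args, **kwargs)


def tracing_wrapper(wrapped, instance, args, kwargs):
    # Stand-in for the wrapper that starts a BackgroundTask in the hunk above.
    return wrapped(*args, **kwargs)


task = TaskWrapper(lambda x: x * 2, tracing_wrapper)
assert task(3) == 6      # normal __call__ path
assert task.run(3) == 6  # Celery's micro-optimized run() path
```

Either entry point now goes through the wrapper, which is exactly the property the original `obj.__dict__['run'] = obj.__call__` hotfix was providing.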
-import sys - from newrelic.api.function_trace import FunctionTraceWrapper -from newrelic.api.object_wrapper import ObjectWrapper, callable_name +from newrelic.common.object_names import callable_name +from newrelic.common.object_wrapper import wrap_function_wrapper, function_wrapper from newrelic.api.transaction import current_transaction from newrelic.api.time_trace import notice_error -from newrelic.common.object_wrapper import wrap_function_wrapper def _nr_wrap_handle_exception(wrapped, instance, args, kwargs): @@ -56,6 +54,7 @@ def outer_fn_wrapper(outer_fn, instance, args, kwargs): name = callable_name(callback) group = None + @function_wrapper def inner_fn_wrapper(inner_fn, instance, args, kwargs): transaction = current_transaction() @@ -69,18 +68,14 @@ def inner_fn_wrapper(inner_fn, instance, args, kwargs): result = outer_fn(*args, **kwargs) - return ObjectWrapper(result, None, inner_fn_wrapper) + return inner_fn_wrapper(result) def instrument_tastypie_resources(module): - _wrap_view = module.Resource.wrap_view - module.Resource.wrap_view = ObjectWrapper( - _wrap_view, None, outer_fn_wrapper) + wrap_function_wrapper(module, "Resource.wrap_view", outer_fn_wrapper) - wrap_function_wrapper(module, 'Resource._handle_500', - _nr_wrap_handle_exception) + wrap_function_wrapper(module, 'Resource._handle_500', _nr_wrap_handle_exception) def instrument_tastypie_api(module): - _wrap_view = module.Api.wrap_view - module.Api.wrap_view = ObjectWrapper(_wrap_view, None, outer_fn_wrapper) + wrap_function_wrapper(module, "Api.wrap_view", outer_fn_wrapper) diff --git a/newrelic/hooks/external_botocore.py b/newrelic/hooks/external_botocore.py index 12bdfcafe2..2a327a84a0 100644 --- a/newrelic/hooks/external_botocore.py +++ b/newrelic/hooks/external_botocore.py @@ -158,7 +158,6 @@ def extract_bedrock_titan_text_model(request_body, response_body=None): input_message_list = [{"role": "user", "content": request_body.get("inputText", "")}] - chat_completion_summary_dict = { 
"request.max_tokens": request_config.get("maxTokenCount", ""), "request.temperature": request_config.get("temperature", ""), @@ -170,7 +169,9 @@ def extract_bedrock_titan_text_model(request_body, response_body=None): completion_tokens = sum(result["tokenCount"] for result in response_body.get("results", [])) total_tokens = input_tokens + completion_tokens - output_message_list = [{"role": "assistant", "content": result["outputText"]} for result in response_body.get("results", [])] + output_message_list = [ + {"role": "assistant", "content": result["outputText"]} for result in response_body.get("results", []) + ] chat_completion_summary_dict.update( { @@ -218,7 +219,9 @@ def extract_bedrock_ai21_j2_model(request_body, response_body=None): } if response_body: - output_message_list =[{"role": "assistant", "content": result["data"]["text"]} for result in response_body.get("completions", [])] + output_message_list = [ + {"role": "assistant", "content": result["data"]["text"]} for result in response_body.get("completions", []) + ] chat_completion_summary_dict.update( { @@ -275,7 +278,9 @@ def extract_bedrock_cohere_model(request_body, response_body=None): } if response_body: - output_message_list = [{"role": "assistant", "content": result["text"]} for result in response_body.get("generations", [])] + output_message_list = [ + {"role": "assistant", "content": result["text"]} for result in response_body.get("generations", []) + ] chat_completion_summary_dict.update( { "response.choices.finish_reason": response_body["generations"][0]["finish_reason"], @@ -377,13 +382,31 @@ def wrap_bedrock_runtime_invoke_model(wrapped, instance, args, kwargs): if operation == "embedding": # Only available embedding models handle_embedding_event( - instance, transaction, extractor, model, None, None, request_body, - ft.duration, True, trace_id, span_id + instance, + transaction, + extractor, + model, + None, + None, + request_body, + ft.duration, + True, + trace_id, + span_id, ) else: 
handle_chat_completion_event( - instance, transaction, extractor, model, None, None, request_body, - ft.duration, True, trace_id, span_id + instance, + transaction, + extractor, + model, + None, + None, + request_body, + ft.duration, + True, + trace_id, + span_id, ) finally: @@ -430,7 +453,17 @@ def wrap_bedrock_runtime_invoke_model(wrapped, instance, args, kwargs): def handle_embedding_event( - client, transaction, extractor, model, response_body, response_headers, request_body, duration, is_error, trace_id, span_id + client, + transaction, + extractor, + model, + response_body, + response_headers, + request_body, + duration, + is_error, + trace_id, + span_id, ): embedding_id = str(uuid.uuid4()) @@ -465,7 +498,17 @@ def handle_embedding_event( def handle_chat_completion_event( - client, transaction, extractor, model, response_body, response_headers, request_body, duration, is_error, trace_id, span_id + client, + transaction, + extractor, + model, + response_body, + response_headers, + request_body, + duration, + is_error, + trace_id, + span_id, ): custom_attrs_dict = transaction._custom_params conversation_id = custom_attrs_dict.get("conversation_id", "") diff --git a/newrelic/hooks/external_feedparser.py b/newrelic/hooks/external_feedparser.py index 13f9ebd63e..7f23cd6af5 100644 --- a/newrelic/hooks/external_feedparser.py +++ b/newrelic/hooks/external_feedparser.py @@ -13,24 +13,22 @@ # limitations under the License. 
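The Bedrock extractors above all follow the same pattern: pull the assistant messages and token counts out of the model-specific response body with comprehensions. A minimal standalone sketch of the Titan-style extraction, using an illustrative payload shape taken from the diff (the `inputTextTokenCount` field is an assumption, not shown in this hunk):

```python
# Hypothetical Titan-style response body; field names other than "results",
# "outputText", and "tokenCount" are assumptions for illustration.
response_body = {
    "inputTextTokenCount": 5,
    "results": [
        {"outputText": "Hello there.", "tokenCount": 4},
        {"outputText": "How can I help?", "tokenCount": 6},
    ],
}

# Token accounting mirrors the extractor above: prompt tokens plus the sum of
# per-result completion tokens.
input_tokens = response_body.get("inputTextTokenCount", 0)
completion_tokens = sum(result["tokenCount"] for result in response_body.get("results", []))
total_tokens = input_tokens + completion_tokens

# One assistant message event per generation result.
output_message_list = [
    {"role": "assistant", "content": result["outputText"]} for result in response_body.get("results", [])
]
```

The same shape of comprehension reappears for the AI21 (`completions[...]["data"]["text"]`) and Cohere (`generations[...]["text"]`) extractors; only the response keys differ.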
import sys -import types +import newrelic.api.external_trace +import newrelic.api.object_wrapper +import newrelic.api.transaction +import newrelic.common.object_wrapper import newrelic.packages.six as six -import newrelic.api.transaction -import newrelic.api.object_wrapper -import newrelic.api.external_trace class capture_external_trace(object): - def __init__(self, wrapped): newrelic.api.object_wrapper.update_wrapper(self, wrapped) self._nr_next_object = wrapped - if not hasattr(self, '_nr_last_object'): + if not hasattr(self, "_nr_last_object"): self._nr_last_object = wrapped def __call__(self, url, *args, **kwargs): - # The URL be a string or a file like object. Pass call # through if not a string. @@ -43,16 +41,15 @@ def __call__(self, url, *args, **kwargs): parsed_url = url - if parsed_url.startswith('feed:http'): + if parsed_url.startswith("feed:http"): parsed_url = parsed_url[5:] - elif parsed_url.startswith('feed:'): - parsed_url = 'http:' + url[5:] + elif parsed_url.startswith("feed:"): + parsed_url = "http:" + url[5:] - if parsed_url.split(':')[0].lower() in ['http', 'https', 'ftp']: + if parsed_url.split(":")[0].lower() in ["http", "https", "ftp"]: current_transaction = newrelic.api.transaction.current_transaction() if current_transaction: - trace = newrelic.api.external_trace.ExternalTrace( - 'feedparser', parsed_url, 'GET') + trace = newrelic.api.external_trace.ExternalTrace("feedparser", parsed_url, "GET") context_manager = trace.__enter__() try: result = self._nr_next_object(url, *args, **kwargs) @@ -67,8 +64,8 @@ def __call__(self, url, *args, **kwargs): return self._nr_next_object(url, *args, **kwargs) def __getattr__(self, name): - return getattr(self._nr_next_object, name) + return getattr(self._nr_next_object, name) + def instrument(module): - newrelic.api.object_wrapper.wrap_object( - module, 'parse', capture_external_trace) + newrelic.common.object_wrapper.wrap_object(module, "parse", capture_external_trace) diff --git 
a/newrelic/hooks/external_httplib.py b/newrelic/hooks/external_httplib.py index 7d322f7194..ca8decb40c 100644 --- a/newrelic/hooks/external_httplib.py +++ b/newrelic/hooks/external_httplib.py @@ -18,7 +18,7 @@ from newrelic.api.external_trace import ExternalTrace from newrelic.api.transaction import current_transaction -from newrelic.common.object_wrapper import ObjectWrapper +from newrelic.common.object_wrapper import wrap_function_wrapper def httplib_endheaders_wrapper(wrapped, instance, args, kwargs, @@ -125,24 +125,7 @@ def instrument(module): else: library = 'http' - module.HTTPConnection.endheaders = ObjectWrapper( - module.HTTPConnection.endheaders, - None, - functools.partial(httplib_endheaders_wrapper, scheme='http', - library=library)) - - module.HTTPSConnection.endheaders = ObjectWrapper( - module.HTTPConnection.endheaders, - None, - functools.partial(httplib_endheaders_wrapper, scheme='https', - library=library)) - - module.HTTPConnection.getresponse = ObjectWrapper( - module.HTTPConnection.getresponse, - None, - httplib_getresponse_wrapper) - - module.HTTPConnection.putheader = ObjectWrapper( - module.HTTPConnection.putheader, - None, - httplib_putheader_wrapper) + wrap_function_wrapper(module, "HTTPConnection.endheaders", functools.partial(httplib_endheaders_wrapper, scheme='http', library=library)) + wrap_function_wrapper(module, "HTTPSConnection.endheaders", functools.partial(httplib_endheaders_wrapper, scheme='https', library=library)) + wrap_function_wrapper(module, "HTTPConnection.getresponse", httplib_getresponse_wrapper) + wrap_function_wrapper(module, "HTTPConnection.putheader", httplib_putheader_wrapper) diff --git a/newrelic/hooks/framework_django.py b/newrelic/hooks/framework_django.py index 3d9f448cc2..91d6fec200 100644 --- a/newrelic/hooks/framework_django.py +++ b/newrelic/hooks/framework_django.py @@ -16,6 +16,7 @@ import logging import sys import threading +import warnings from newrelic.api.application import register_application from 
newrelic.api.background_task import BackgroundTaskWrapper @@ -91,7 +92,6 @@ def _setting_set(value): def should_add_browser_timing(response, transaction): - # Don't do anything if receive a streaming response which # was introduced in Django 1.5. Need to avoid this as there # will be no 'content' attribute. Alternatively there may be @@ -111,7 +111,7 @@ def should_add_browser_timing(response, transaction): if not transaction or not transaction.enabled: return False - # Only insert RUM JavaScript headers and footers if enabled + # Only insert RUM JavaScript headers if enabled # in configuration and not already likely inserted. if not transaction.settings.browser_monitoring.enabled: @@ -152,38 +152,21 @@ def should_add_browser_timing(response, transaction): return True -# Response middleware for automatically inserting RUM header and -# footer into HTML response returned by application +# Response middleware for automatically inserting RUM header into HTML response returned by application def browser_timing_insertion(response, transaction): - - # No point continuing if header is empty. This can occur if - # RUM is not enabled within the UI. It is assumed at this - # point that if header is not empty, then footer will not be - # empty. We don't want to generate the footer just yet as - # want to do that as late as possible so that application - # server time in footer is as accurate as possible. In - # particular, if the response content is generated on demand - # then the flattening of the response could take some time - # and we want to track that. We thus generate footer below - # at point of insertion. - - header = transaction.browser_timing_header() - - if not header: - return response - - def html_to_be_inserted(): - return six.b(header) + six.b(transaction.browser_timing_footer()) - - # Make sure we flatten any content first as it could be - # stored as a list of strings in the response object. 
We - # assign it back to the response object to avoid having - # multiple copies of the string in memory at the same time + # No point continuing if header is empty. This can occur if RUM is not enabled within the UI. We don't want to + # generate the header just yet as we want to do that as late as possible so that application server time in the header + # is as accurate as possible. In particular, if the response content is generated on demand then the flattening + # of the response could take some time and we want to track that. We thus generate the header below at + # the point of insertion. + + # Make sure we flatten any content first as it could be stored as a list of strings in the response object. We + # assign it back to the response object to avoid having multiple copies of the string in memory at the same time # as we progress through steps below. - result = insert_html_snippet(response.content, html_to_be_inserted) + result = insert_html_snippet(response.content, lambda: six.b(transaction.browser_timing_header())) if result is not None: if transaction.settings.debug.log_autorum_middleware: @@ -200,10 +183,8 @@ def html_to_be_inserted(): return response -# Template tag functions for manually inserting RUM header and -# footer into HTML response. A template tag library for -# 'newrelic' will be automatically inserted into set of tag -# libraries when performing step to instrument the middleware. +# Template tag functions for manually inserting the RUM header into the HTML response. A template tag library for +# 'newrelic' will be automatically inserted into the set of tag libraries when performing the step to instrument the middleware.
def newrelic_browser_timing_header(): @@ -214,10 +195,11 @@ def newrelic_browser_timing_header(): def newrelic_browser_timing_footer(): - from django.utils.safestring import mark_safe - - transaction = current_transaction() - return transaction and mark_safe(transaction.browser_timing_footer()) or "" # nosec + warnings.warn( + "The newrelic_browser_timing_footer function is deprecated. Please migrate to only using the newrelic_browser_timing_header API instead.", + DeprecationWarning, + ) + return "" # nosec # Addition of instrumentation for middleware. Can only do this @@ -228,7 +210,6 @@ def newrelic_browser_timing_footer(): def wrap_leading_middleware(middleware): - # Wrapper to be applied to middleware executed prior to the # view handler being executed. Records the time spent in the # middleware as separate function node and also attempts to @@ -276,7 +257,6 @@ def wrapper(wrapped, instance, args, kwargs): # functionality, so instead of removing this instrumentation, this # will be excluded from the coverage analysis. def wrap_view_middleware(middleware): # pragma: no cover - # This is no longer being used. The changes to strip the # wrapper from the view handler when passed into the function # urlresolvers.reverse() solves most of the problems. To back @@ -342,7 +322,6 @@ def _wrapped(request, view_func, view_args, view_kwargs): def wrap_trailing_middleware(middleware): - # Wrapper to be applied to trailing middleware executed # after the view handler. Records the time spent in the # middleware as separate function node. Transaction is never @@ -358,7 +337,6 @@ def wrap_trailing_middleware(middleware): def insert_and_wrap_middleware(handler, *args, **kwargs): - # Use lock to control access by single thread but also as # flag to indicate if done the initialisation. Lock will be # None if have already done this. 
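The hooks throughout this patch funnel through `wrap_function_wrapper`, which swaps a dotted attribute on a module for a proxy that calls a wrapper with the `(wrapped, instance, args, kwargs)` convention. A rough stdlib-only sketch of that mechanism — the real `newrelic.common.object_wrapper` is built on wrapt object proxies and correctly resolves bound instances, which this simplification does not:

```python
import functools
import types


def wrap_function_wrapper(module, name, wrapper):
    """Replace attribute `name` (dotted path allowed) on `module` with a proxy
    that calls wrapper(wrapped, instance, args, kwargs).

    Simplified sketch: instance is always passed as None, so for methods the
    bound object rides along in args; the real agent uses wrapt to bind it.
    """
    parent = module
    path = name.split(".")
    for attr in path[:-1]:
        parent = getattr(parent, attr)
    original = getattr(parent, path[-1])

    @functools.wraps(original)
    def _proxy(*args, **kwargs):
        return wrapper(original, None, args, kwargs)

    setattr(parent, path[-1], _proxy)


# Usage: intercept calls to a method without editing its class, the way the
# tastypie hook wraps Resource.wrap_view above. Names here are illustrative.
class Resource:
    def wrap_view(self, view):
        return "wrapped:%s" % view


calls = []


def outer_fn_wrapper(wrapped, instance, args, kwargs):
    calls.append(args)  # record the call, then delegate to the original
    return wrapped(*args, **kwargs)


module = types.SimpleNamespace(Resource=Resource)
wrap_function_wrapper(module, "Resource.wrap_view", outer_fn_wrapper)

result = Resource().wrap_view("index")
```

This also shows why the patch replaces the older `ObjectWrapper(original, None, wrapper)` assignment style: passing the module and a dotted name lets one helper handle attribute resolution and replacement in a single call.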
@@ -383,7 +361,6 @@ def insert_and_wrap_middleware(handler, *args, **kwargs): middleware_instrumentation_lock = None try: - # Wrap the middleware to undertake timing and name # the web transaction. The naming is done as lower # priority than that for view handler so view handler @@ -411,7 +388,6 @@ def insert_and_wrap_middleware(handler, *args, **kwargs): def _nr_wrapper_GZipMiddleware_process_response_(wrapped, instance, args, kwargs): - transaction = current_transaction() if transaction is None: @@ -454,7 +430,6 @@ def _nr_wrapper_BaseHandler_get_response_(wrapped, instance, args, kwargs): def instrument_django_core_handlers_base(module): - # Attach a post function to load_middleware() method of # BaseHandler to trigger insertion of browser timing # middleware and wrapping of middleware for timing etc. @@ -468,12 +443,10 @@ def instrument_django_core_handlers_base(module): def instrument_django_gzip_middleware(module): - wrap_function_wrapper(module, "GZipMiddleware.process_response", _nr_wrapper_GZipMiddleware_process_response_) def wrap_handle_uncaught_exception(middleware): - # Wrapper to be applied to handler called when exceptions # propagate up to top level from middleware. Records the # time spent in the handler as separate function node. Names @@ -506,7 +479,6 @@ def _wrapped(request, resolver, exc_info): def instrument_django_core_handlers_wsgi(module): - # Wrap the WSGI application entry point. If this is also # wrapped from the WSGI script file or by the WSGI hosting # mechanism then those will take precedence. @@ -532,7 +504,6 @@ def instrument_django_core_handlers_wsgi(module): def wrap_view_handler(wrapped, priority=3): - # Ensure we don't wrap the view handler more than once. This # looks like it may occur in cases where the resolver is # called recursively. We flag that view handler was wrapped @@ -574,7 +545,6 @@ def wrapper(wrapped, instance, args, kwargs): def wrap_url_resolver(wrapped): - # Wrap URL resolver. 
If resolver returns valid result then # wrap the view handler returned. The type of the result # changes across Django versions so need to check and adapt @@ -624,7 +594,6 @@ def _wrapped(path): def wrap_url_resolver_nnn(wrapped, priority=1): - # Wrapper to be applied to the URL resolver for errors. name = callable_name(wrapped) @@ -647,7 +616,6 @@ def wrapper(wrapped, instance, args, kwargs): def wrap_url_reverse(wrapped): - # Wrap the URL resolver reverse lookup. Where the view # handler is passed in we need to strip any instrumentation # wrapper to ensure that it doesn't interfere with the @@ -667,7 +635,6 @@ def execute(viewname, *args, **kwargs): def instrument_django_core_urlresolvers(module): - # Wrap method which maps a string version of a function # name as used in urls.py pattern so can capture any # exception which is raised during that process. @@ -719,7 +686,6 @@ def instrument_django_core_urlresolvers(module): def instrument_django_urls_base(module): - # Wrap function for performing reverse URL lookup to strip any # instrumentation wrapper when view handler is passed in. @@ -728,7 +694,6 @@ def instrument_django_urls_base(module): def instrument_django_template(module): - # Wrap methods for rendering of Django templates. The name # of the method changed in between Django versions so need # to check for which one we have. The name of the function @@ -753,8 +718,7 @@ def template_name(template, *args): if not hasattr(module, "libraries"): return - # Register template tags used for manual insertion of RUM - # header and footer. + # Register template tags used for manual insertion of RUM header. 
# # TODO This can now be installed as a separate tag library # so should possibly look at deprecating this automatic @@ -775,7 +739,6 @@ def wrapper(wrapped, instance, args, kwargs): def instrument_django_template_loader_tags(module): - # Wrap template block node for timing, naming the node after # the block name as defined in the template rather than # function name. @@ -784,7 +747,6 @@ def instrument_django_template_loader_tags(module): def instrument_django_core_servers_basehttp(module): - # Allow 'runserver' to be used with Django <= 1.3. To do # this we wrap the WSGI application argument on the way in # so that the run() method gets the wrapped instance. @@ -819,7 +781,6 @@ def wrap_wsgi_application_entry_point(server, application, **kwargs): ) if not hasattr(module, "simple_server") and hasattr(module.ServerHandler, "run"): - # Patch the server to make it work properly. def run(self, application): @@ -869,7 +830,6 @@ def instrument_django_contrib_staticfiles_handlers(module): def instrument_django_views_debug(module): - # Wrap methods for handling errors when Django debug # enabled. For 404 we give this higher naming priority over # any prior middleware or view handler to give them @@ -896,7 +856,6 @@ def resolve_view_handler(view, request): def wrap_view_dispatch(wrapped): - # Wrapper to be applied to dispatcher for class based views. 
def wrapper(wrapped, instance, args, kwargs): @@ -996,7 +955,6 @@ def instrument_django_core_management_base(module): @function_wrapper def _nr_wrapper_django_inclusion_tag_wrapper_(wrapped, instance, args, kwargs): - name = hasattr(wrapped, "__name__") and wrapped.__name__ if name is None: @@ -1025,13 +983,11 @@ def _bind_params(func, *args, **kwargs): def _nr_wrapper_django_template_base_Library_inclusion_tag_(wrapped, instance, args, kwargs): - return _nr_wrapper_django_inclusion_tag_decorator_(wrapped(*args, **kwargs)) @function_wrapper def _nr_wrapper_django_template_base_InclusionNode_render_(wrapped, instance, args, kwargs): - if wrapped.__self__ is None: return wrapped(*args, **kwargs) @@ -1046,7 +1002,6 @@ def _nr_wrapper_django_template_base_InclusionNode_render_(wrapped, instance, ar def _nr_wrapper_django_template_base_generic_tag_compiler_(wrapped, instance, args, kwargs): - if wrapped.__code__.co_argcount > 6: # Django > 1.3. @@ -1083,7 +1038,6 @@ def _bind_params(name=None, compile_function=None, *args, **kwargs): return wrapped(*args, **kwargs) def _get_node_class(compile_function): - node_class = None # Django >= 1.4 uses functools.partial @@ -1099,7 +1053,6 @@ def _get_node_class(compile_function): and hasattr(compile_function, "__name__") and compile_function.__name__ == "_curried" ): - # compile_function here is generic_tag_compiler(), which has been # curried. 
To get node_class, we first get the function obj, args, # and kwargs of the curried function from the cells in @@ -1154,7 +1107,6 @@ def instrument_django_template_base(module): settings = global_settings() if "django.instrumentation.inclusion-tags.r1" in settings.feature_flag: - if hasattr(module, "generic_tag_compiler"): wrap_function_wrapper( module, "generic_tag_compiler", _nr_wrapper_django_template_base_generic_tag_compiler_ @@ -1197,7 +1149,6 @@ def _bind_params(original_middleware, *args, **kwargs): def instrument_django_core_handlers_exception(module): - if hasattr(module, "convert_exception_to_response"): wrap_function_wrapper(module, "convert_exception_to_response", _nr_wrapper_convert_exception_to_response_) diff --git a/newrelic/hooks/framework_pylons.py b/newrelic/hooks/framework_pylons.py index 9c5c457cd7..2832261668 100644 --- a/newrelic/hooks/framework_pylons.py +++ b/newrelic/hooks/framework_pylons.py @@ -16,14 +16,15 @@ import newrelic.api.transaction_name import newrelic.api.function_trace import newrelic.api.error_trace -import newrelic.api.object_wrapper +import newrelic.common.object_wrapper +from newrelic.common.object_names import callable_name import newrelic.api.import_hook from newrelic.api.time_trace import notice_error def name_controller(self, environ, start_response): action = environ['pylons.routes_dict']['action'] - return "%s.%s" % (newrelic.api.object_wrapper.callable_name(self), action) + return "%s.%s" % (callable_name(self), action) class capture_error(object): def __init__(self, wrapped): @@ -69,12 +70,12 @@ def instrument(module): module, 'WSGIController.__call__') def name_WSGIController_perform_call(self, func, args): - return newrelic.api.object_wrapper.callable_name(func) + return callable_name(func) newrelic.api.function_trace.wrap_function_trace( module, 'WSGIController._perform_call', name_WSGIController_perform_call) - newrelic.api.object_wrapper.wrap_object( + newrelic.common.object_wrapper.wrap_object( module, 
'WSGIController._perform_call', capture_error) elif module.__name__ == 'pylons.templating': diff --git a/newrelic/hooks/framework_pyramid.py b/newrelic/hooks/framework_pyramid.py index 996ebb372d..ba5e5e07af 100644 --- a/newrelic/hooks/framework_pyramid.py +++ b/newrelic/hooks/framework_pyramid.py @@ -53,17 +53,11 @@ wrap_function_wrapper, wrap_out_function, ) +from newrelic.common.package_version_utils import get_package_version def instrument_pyramid_router(module): - pyramid_version = None - - try: - import pkg_resources - - pyramid_version = pkg_resources.get_distribution("pyramid").version - except Exception: - pass + pyramid_version = get_package_version("pyramid") wrap_wsgi_application(module, "Router.__call__", framework=("Pyramid", pyramid_version)) diff --git a/newrelic/hooks/framework_web2py.py b/newrelic/hooks/framework_web2py.py index e9785e02f5..aeb22bd84a 100644 --- a/newrelic/hooks/framework_web2py.py +++ b/newrelic/hooks/framework_web2py.py @@ -22,6 +22,7 @@ import newrelic.api.function_trace import newrelic.api.transaction_name import newrelic.api.object_wrapper +import newrelic.common.object_wrapper import newrelic.api.pre_function from newrelic.api.time_trace import notice_error @@ -132,7 +133,7 @@ def __call__(self, request, response, session): def __getattr__(self, name): return getattr(self._nr_next_object, name) - newrelic.api.object_wrapper.wrap_object( + newrelic.common.object_wrapper.wrap_object( module, 'serve_controller', error_serve_controller) def instrument_gluon_template(module): diff --git a/newrelic/hooks/framework_webpy.py b/newrelic/hooks/framework_webpy.py index c1785a89f3..8f7226862e 100644 --- a/newrelic/hooks/framework_webpy.py +++ b/newrelic/hooks/framework_webpy.py @@ -12,18 +12,16 @@ # See the License for the specific language governing permissions and # limitations under the License. 
-import sys - -import newrelic.packages.six as six - -import newrelic.api.transaction import newrelic.api.function_trace import newrelic.api.in_function import newrelic.api.out_function import newrelic.api.pre_function -from newrelic.api.object_wrapper import callable_name -from newrelic.api.wsgi_application import WSGIApplicationWrapper +import newrelic.api.transaction +import newrelic.packages.six as six from newrelic.api.time_trace import notice_error +from newrelic.api.wsgi_application import WSGIApplicationWrapper +from newrelic.common.object_names import callable_name + def transaction_name_delegate(*args, **kwargs): transaction = newrelic.api.transaction.current_transaction() @@ -35,24 +33,22 @@ def transaction_name_delegate(*args, **kwargs): transaction.set_transaction_name(f) return (args, kwargs) + def wrap_handle_exception(self): transaction = newrelic.api.transaction.current_transaction() if transaction: notice_error() + def template_name(render_obj, name): return name + def instrument(module): + if module.__name__ == "web.application": + newrelic.api.out_function.wrap_out_function(module, "application.wsgifunc", WSGIApplicationWrapper) + newrelic.api.in_function.wrap_in_function(module, "application._delegate", transaction_name_delegate) + newrelic.api.pre_function.wrap_pre_function(module, "application.internalerror", wrap_handle_exception) - if module.__name__ == 'web.application': - newrelic.api.out_function.wrap_out_function( - module, 'application.wsgifunc', WSGIApplicationWrapper) - newrelic.api.in_function.wrap_in_function( - module, 'application._delegate', transaction_name_delegate) - newrelic.api.pre_function.wrap_pre_function( - module, 'application.internalerror', wrap_handle_exception) - - elif module.__name__ == 'web.template': - newrelic.api.function_trace.wrap_function_trace( - module, 'render.__getattr__', template_name, 'Template/Render') + elif module.__name__ == "web.template": + 
newrelic.api.function_trace.wrap_function_trace(module, "render.__getattr__", template_name, "Template/Render") diff --git a/newrelic/hooks/logger_logging.py b/newrelic/hooks/logger_logging.py index 67fb46525c..7b320cd911 100644 --- a/newrelic/hooks/logger_logging.py +++ b/newrelic/hooks/logger_logging.py @@ -24,6 +24,9 @@ from urllib.parse import quote +IGNORED_LOG_RECORD_KEYS = set(["message", "msg"]) + + def add_nr_linking_metadata(message): available_metadata = get_linking_metadata() entity_name = quote(available_metadata.get("entity.name", "")) @@ -74,8 +77,18 @@ def wrap_callHandlers(wrapped, instance, args, kwargs): if settings.application_logging.forwarding and settings.application_logging.forwarding.enabled: try: - message = record.getMessage() - record_log_event(message, level_name, int(record.created * 1000)) + message = record.msg + if not isinstance(message, dict): + # Allow python to convert the message to a string and template it with args. + message = record.getMessage() + + # Grab and filter context attributes from log record + record_attrs = vars(record) + context_attrs = {k: record_attrs[k] for k in record_attrs if k not in IGNORED_LOG_RECORD_KEYS} + + record_log_event( + message=message, level=level_name, timestamp=int(record.created * 1000), attributes=context_attrs + ) except Exception: pass diff --git a/newrelic/hooks/logger_loguru.py b/newrelic/hooks/logger_loguru.py index dc9843b204..2676859072 100644 --- a/newrelic/hooks/logger_loguru.py +++ b/newrelic/hooks/logger_loguru.py @@ -18,19 +18,24 @@ from newrelic.api.application import application_instance from newrelic.api.transaction import current_transaction, record_log_event from newrelic.common.object_wrapper import wrap_function_wrapper +from newrelic.common.package_version_utils import get_package_version_tuple from newrelic.common.signature import bind_args from newrelic.core.config import global_settings from newrelic.hooks.logger_logging import add_nr_linking_metadata -from 
newrelic.packages import six _logger = logging.getLogger(__name__) -is_pypy = hasattr(sys, "pypy_version_info") +IS_PYPY = hasattr(sys, "pypy_version_info") +LOGURU_VERSION = get_package_version_tuple("loguru") +LOGURU_FILTERED_RECORD_ATTRS = {"extra", "message", "time", "level", "_nr_original_message", "record"} +ALLOWED_LOGURU_OPTIONS_LENGTHS = frozenset((8, 9)) -def loguru_version(): - from loguru import __version__ - return tuple(int(x) for x in __version__.split(".")) +def _filter_record_attributes(record): + attrs = {k: v for k, v in record.items() if k not in LOGURU_FILTERED_RECORD_ATTRS} + extra_attrs = dict(record.get("extra", {})) + attrs.update({"extra.%s" % k: v for k, v in extra_attrs.items()}) + return attrs def _nr_log_forwarder(message_instance): @@ -59,15 +64,17 @@ def _nr_log_forwarder(message_instance): application.record_custom_metric("Logging/lines/%s" % level_name, {"count": 1}) if settings.application_logging.forwarding and settings.application_logging.forwarding.enabled: + attrs = _filter_record_attributes(record) + try: - record_log_event(message, level_name, int(record["time"].timestamp())) + time = record.get("time", None) + if time: + time = int(time.timestamp()) + record_log_event(message, level_name, time, attributes=attrs) except Exception: pass -ALLOWED_LOGURU_OPTIONS_LENGTHS = frozenset((8, 9)) - - def wrap_log(wrapped, instance, args, kwargs): try: bound_args = bind_args(wrapped, args, kwargs) @@ -78,7 +85,7 @@ def wrap_log(wrapped, instance, args, kwargs): # Loguru looks into the stack trace to find the caller's module and function names. # options[1] tells loguru how far up to look in the stack trace to find the caller. # Because wrap_log is an extra call in the stack trace, loguru needs to look 1 level higher. 
- if not is_pypy: + if not IS_PYPY: options[1] += 1 else: # PyPy inspection requires an additional frame of offset, as the wrapt internals seem to @@ -109,7 +116,7 @@ def _nr_log_patcher(record): record["_nr_original_message"] = message = record["message"] record["message"] = add_nr_linking_metadata(message) - if loguru_version() > (0, 6, 0): + if LOGURU_VERSION > (0, 6, 0): if original_patcher is not None: patchers = [p for p in original_patcher] # Consumer iterable into list so we can modify # Wipe out reference so patchers aren't called twice, as the framework will handle calling other patchers. @@ -135,7 +142,7 @@ def patch_loguru_logger(logger): logger.add(_nr_log_forwarder, format="{message}") logger._core._nr_instrumented = True elif not hasattr(logger, "_nr_instrumented"): # pragma: no cover - for _, handler in six.iteritems(logger._handlers): + for _, handler in logger._handlers.items(): if handler._writer is _nr_log_forwarder: logger._nr_instrumented = True return diff --git a/newrelic/hooks/logger_structlog.py b/newrelic/hooks/logger_structlog.py index e652a795c8..abb1f44cb9 100644 --- a/newrelic/hooks/logger_structlog.py +++ b/newrelic/hooks/logger_structlog.py @@ -12,18 +12,23 @@ # See the License for the specific language governing permissions and # limitations under the License. -from newrelic.common.object_wrapper import wrap_function_wrapper +import functools + +from newrelic.api.application import application_instance from newrelic.api.transaction import current_transaction, record_log_event +from newrelic.common.object_wrapper import wrap_function_wrapper +from newrelic.common.signature import bind_args from newrelic.core.config import global_settings -from newrelic.api.application import application_instance from newrelic.hooks.logger_logging import add_nr_linking_metadata +@functools.lru_cache(maxsize=None) def normalize_level_name(method_name): # Look up level number for method name, using result to look up level name for that level number. 
# Convert result to upper case, and default to UNKNOWN in case of errors or missing values. try: from structlog._log_levels import _LEVEL_TO_NAME, _NAME_TO_LEVEL + return _LEVEL_TO_NAME[_NAME_TO_LEVEL[method_name]].upper() except Exception: return "UNKNOWN" @@ -33,14 +38,7 @@ def bind_process_event(method_name, event, event_kw): return method_name, event, event_kw -def wrap__process_event(wrapped, instance, args, kwargs): - try: - method_name, event, event_kw = bind_process_event(*args, **kwargs) - except TypeError: - return wrapped(*args, **kwargs) - - original_message = event # Save original undecorated message - +def new_relic_event_consumer(logger, level, event): transaction = current_transaction() if transaction: @@ -49,16 +47,27 @@ def wrap__process_event(wrapped, instance, args, kwargs): settings = global_settings() # Return early if application logging not enabled - if settings and settings.application_logging and settings.application_logging.enabled: - if settings.application_logging.local_decorating and settings.application_logging.local_decorating.enabled: - event = add_nr_linking_metadata(event) - - # Send log to processors for filtering, allowing any DropEvent exceptions that occur to prevent instrumentation from recording the log event. - result = wrapped(method_name, event, event_kw) - - level_name = normalize_level_name(method_name) - - if settings.application_logging.metrics and settings.application_logging.metrics.enabled: + if settings and settings.application_logging.enabled: + if isinstance(event, (str, bytes, bytearray)): + message = original_message = event + event_attrs = {} + elif isinstance(event, dict): + message = original_message = event.get("event", "") + event_attrs = {k: v for k, v in event.items() if k != "event"} + else: + # Unclear how to proceed, ignore log. Avoid logging an error message or we may incur an infinite loop. 
+ return event + + if settings.application_logging.local_decorating.enabled: + message = add_nr_linking_metadata(message) + if isinstance(event, (str, bytes, bytearray)): + event = message + elif isinstance(event, dict) and "event" in event: + event["event"] = message + + level_name = normalize_level_name(level) + + if settings.application_logging.metrics.enabled: if transaction: transaction.record_custom_metric("Logging/lines", {"count": 1}) transaction.record_custom_metric("Logging/lines/%s" % level_name, {"count": 1}) @@ -68,19 +77,57 @@ def wrap__process_event(wrapped, instance, args, kwargs): application.record_custom_metric("Logging/lines", {"count": 1}) application.record_custom_metric("Logging/lines/%s" % level_name, {"count": 1}) - if settings.application_logging.forwarding and settings.application_logging.forwarding.enabled: + if settings.application_logging.forwarding.enabled: try: - record_log_event(original_message, level_name) + record_log_event(original_message, level_name, attributes=event_attrs) except Exception: pass - # Return the result from wrapped after we've recorded the resulting log event. 
- return result + return event + + +def wrap__process_event(wrapped, instance, args, kwargs): + transaction = current_transaction() + + if transaction: + settings = transaction.settings + else: + settings = global_settings() + + # Return early if application logging not enabled + if settings and settings.application_logging.enabled: + processors = instance._processors + if not processors: + instance._processors = [new_relic_event_consumer] + elif processors[-1] != new_relic_event_consumer: + # Remove our processor if it exists and add it to the end + if new_relic_event_consumer in processors: + processors.remove(new_relic_event_consumer) + processors.append(new_relic_event_consumer) return wrapped(*args, **kwargs) +def wrap__find_first_app_frame_and_name(wrapped, instance, args, kwargs): + try: + bound_args = bind_args(wrapped, args, kwargs) + if bound_args["additional_ignores"]: + bound_args["additional_ignores"] = list(bound_args["additional_ignores"]) + bound_args["additional_ignores"].append("newrelic") + else: + bound_args["additional_ignores"] = ["newrelic"] + except Exception: + return wrapped(*args, **kwargs) + + return wrapped(**bound_args) + + def instrument_structlog__base(module): if hasattr(module, "BoundLoggerBase") and hasattr(module.BoundLoggerBase, "_process_event"): wrap_function_wrapper(module, "BoundLoggerBase._process_event", wrap__process_event) + + +def instrument_structlog__frames(module): + if hasattr(module, "_find_first_app_frame_and_name"): + wrap_function_wrapper(module, "_find_first_app_frame_and_name", wrap__find_first_app_frame_and_name) diff --git a/newrelic/hooks/messagebroker_confluentkafka.py b/newrelic/hooks/messagebroker_confluentkafka.py index 81d9fa59af..b7c70a129d 100644 --- a/newrelic/hooks/messagebroker_confluentkafka.py +++ b/newrelic/hooks/messagebroker_confluentkafka.py @@ -65,10 +65,12 @@ def wrap_Producer_produce(wrapped, instance, args, kwargs): destination_type="Topic", destination_name=topic or "Default", 
source=wrapped, - ) as trace: - dt_headers = {k: v.encode("utf-8") for k, v in trace.generate_request_headers(transaction)} + ): + dt_headers = {k: v.encode("utf-8") for k, v in MessageTrace.generate_request_headers(transaction)} # headers can be a list of tuples or a dict so convert to dict for consistency. - dt_headers.update(dict(headers) if headers else {}) + if headers: + dt_headers.update(dict(headers)) + try: return wrapped(topic, headers=dt_headers, *args, **kwargs) except Exception as error: diff --git a/newrelic/hooks/messagebroker_kafkapython.py b/newrelic/hooks/messagebroker_kafkapython.py index 9124a16dcd..dff5e2c786 100644 --- a/newrelic/hooks/messagebroker_kafkapython.py +++ b/newrelic/hooks/messagebroker_kafkapython.py @@ -58,11 +58,16 @@ def wrap_KafkaProducer_send(wrapped, instance, args, kwargs): destination_name=topic or "Default", source=wrapped, terminal=False, - ) as trace: - dt_headers = [(k, v.encode("utf-8")) for k, v in trace.generate_request_headers(transaction)] - headers.extend(dt_headers) + ): + dt_headers = [(k, v.encode("utf-8")) for k, v in MessageTrace.generate_request_headers(transaction)] + # headers is a list of (key, value) tuples here, so extend dt_headers with any caller-supplied headers.
+ if headers: + dt_headers.extend(headers) + try: - return wrapped(topic, value=value, key=key, headers=headers, partition=partition, timestamp_ms=timestamp_ms) + return wrapped( + topic, value=value, key=key, headers=dt_headers, partition=partition, timestamp_ms=timestamp_ms + ) except Exception: notice_error() raise diff --git a/newrelic/hooks/messagebroker_pika.py b/newrelic/hooks/messagebroker_pika.py index d6120c10de..5396e38070 100644 --- a/newrelic/hooks/messagebroker_pika.py +++ b/newrelic/hooks/messagebroker_pika.py @@ -278,7 +278,7 @@ def _generator(generator): if any(exc): to_throw = exc exc = (None, None, None) - yielded = generator.throw(*to_throw) + yielded = generator.throw(to_throw[1]) else: yielded = generator.send(value) diff --git a/newrelic/hooks/middleware_flask_compress.py b/newrelic/hooks/middleware_flask_compress.py index 09e35b3cd2..078cc3d989 100644 --- a/newrelic/hooks/middleware_flask_compress.py +++ b/newrelic/hooks/middleware_flask_compress.py @@ -18,35 +18,41 @@ from newrelic.api.transaction import current_transaction from newrelic.common.object_wrapper import wrap_function_wrapper from newrelic.config import extra_settings - from newrelic.packages import six _logger = logging.getLogger(__name__) _boolean_states = { - '1': True, 'yes': True, 'true': True, 'on': True, - '0': False, 'no': False, 'false': False, 'off': False + "1": True, + "yes": True, + "true": True, + "on": True, + "0": False, + "no": False, + "false": False, + "off": False, } def _setting_boolean(value): if value.lower() not in _boolean_states: - raise ValueError('Not a boolean: %s' % value) + raise ValueError("Not a boolean: %s" % value) return _boolean_states[value.lower()] _settings_types = { - 'browser_monitoring.auto_instrument': _setting_boolean, - 'browser_monitoring.auto_instrument_passthrough': _setting_boolean, + "browser_monitoring.auto_instrument": _setting_boolean, + "browser_monitoring.auto_instrument_passthrough": _setting_boolean, } _settings_defaults 
= { - 'browser_monitoring.auto_instrument': True, - 'browser_monitoring.auto_instrument_passthrough': True, + "browser_monitoring.auto_instrument": True, + "browser_monitoring.auto_instrument_passthrough": True, } -flask_compress_settings = extra_settings('import-hook:flask_compress', - types=_settings_types, defaults=_settings_defaults) +flask_compress_settings = extra_settings( + "import-hook:flask_compress", types=_settings_types, defaults=_settings_defaults +) def _nr_wrapper_Compress_after_request(wrapped, instance, args, kwargs): @@ -62,7 +68,7 @@ def _params(response, *args, **kwargs): if not transaction: return wrapped(*args, **kwargs) - # Only insert RUM JavaScript headers and footers if enabled + # Only insert RUM JavaScript headers if enabled # in configuration and not already likely inserted. if not transaction.settings.browser_monitoring.enabled: @@ -83,45 +89,34 @@ def _params(response, *args, **kwargs): # a user may want to also perform insertion for # 'application/xhtml+xml'. - ctype = (response.mimetype or '').lower() + ctype = (response.mimetype or "").lower() if ctype not in transaction.settings.browser_monitoring.content_type: return wrapped(*args, **kwargs) # Don't risk it if content encoding already set. - if 'Content-Encoding' in response.headers: + if "Content-Encoding" in response.headers: return wrapped(*args, **kwargs) # Don't risk it if content is actually within an attachment. - cdisposition = response.headers.get('Content-Disposition', '').lower() + cdisposition = response.headers.get("Content-Disposition", "").lower() - if cdisposition.split(';')[0].strip() == 'attachment': + if cdisposition.split(";")[0].strip() == "attachment": return wrapped(*args, **kwargs) - # No point continuing if header is empty. This can occur if - # RUM is not enabled within the UI. It is assumed at this - # point that if header is not empty, then footer will not be - # empty. 
We don't want to generate the footer just yet as - # want to do that as late as possible so that application - # server time in footer is as accurate as possible. In - # particular, if the response content is generated on demand - # then the flattening of the response could take some time - # and we want to track that. We thus generate footer below - # at point of insertion. - - header = transaction.browser_timing_header() - - if not header: - return wrapped(*args, **kwargs) + # No point continuing if header is empty. This can occur if RUM is not enabled within the UI. We don't want to + # generate the header just yet as we want to do that as late as possible so that application server time in header + # is as accurate as possible. In particular, if the response content is generated on demand then the flattening + # of the response could take some time and we want to track that. We thus generate header below at + # the point of insertion. # If the response has direct_passthrough flagged, then is # likely to be streaming a file or other large response. - direct_passthrough = getattr(response, 'direct_passthrough', None) + direct_passthrough = getattr(response, "direct_passthrough", None) if direct_passthrough: - if not (flask_compress_settings. - browser_monitoring.auto_instrument_passthrough): + if not (flask_compress_settings.browser_monitoring.auto_instrument_passthrough): return wrapped(*args, **kwargs) # In those cases, if the mimetype is still a supported browser @@ -131,34 +126,31 @@ def _params(response, *args, **kwargs): # # In order to do that, we have to disable direct_passthrough on the # response since we have to immediately read the contents of the file. 
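The lazy-generation rationale in the comments above (build the RUM header as late as possible, only once an insertion point is actually found) can be sketched in isolation. `insert_before_body_end` below is a hypothetical stand-in for the agent's `insert_html_snippet`; the key point is that the expensive callable only runs when insertion will actually happen:

```python
# Minimal sketch of deferred header generation. The make_header callable
# is invoked only when an insertion point is found in the document.
def insert_before_body_end(html, make_header):
    marker = b"</body>"
    index = html.find(marker)
    if index == -1:
        return None  # no insertion point: header is never generated
    return html[:index] + make_header() + html[index:]

generated = []

def expensive_header():
    generated.append(1)  # record that generation actually happened
    return b"<script>/* timing */</script>"

page = insert_before_body_end(b"<html><body>hi</body></html>", expensive_header)
skipped = insert_before_body_end(b"not html at all", expensive_header)
```

After both calls, `generated` has a single entry: the second call bailed out before ever invoking the callable, which is exactly the behavior the comment is after.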
- elif ctype == 'text/html': + elif ctype == "text/html": response.direct_passthrough = False else: return wrapped(*args, **kwargs) - def html_to_be_inserted(): - return six.b(header) + six.b(transaction.browser_timing_footer()) - # Make sure we flatten any content first as it could be # stored as a list of strings in the response object. We # assign it back to the response object to avoid having # multiple copies of the string in memory at the same time # as we progress through steps below. - result = insert_html_snippet(response.get_data(), html_to_be_inserted) + result = insert_html_snippet(response.get_data(), lambda: six.b(transaction.browser_timing_header())) if result is not None: if transaction.settings.debug.log_autorum_middleware: - _logger.debug('RUM insertion from flask_compress ' - 'triggered. Bytes added was %r.', - len(result) - len(response.get_data())) + _logger.debug( + "RUM insertion from flask_compress " "triggered. Bytes added was %r.", + len(result) - len(response.get_data()), + ) response.set_data(result) - response.headers['Content-Length'] = str(len(response.get_data())) + response.headers["Content-Length"] = str(len(response.get_data())) return wrapped(*args, **kwargs) def instrument_flask_compress(module): - wrap_function_wrapper(module, 'Compress.after_request', - _nr_wrapper_Compress_after_request) + wrap_function_wrapper(module, "Compress.after_request", _nr_wrapper_Compress_after_request) diff --git a/newrelic/hooks/mlmodel_langchain.py b/newrelic/hooks/mlmodel_langchain.py index 2c501f27fe..81605f4c2a 100644 --- a/newrelic/hooks/mlmodel_langchain.py +++ b/newrelic/hooks/mlmodel_langchain.py @@ -50,6 +50,7 @@ "langchain_community.vectorstores.hippo": "Hippo", "langchain_community.vectorstores.hologres": "Hologres", "langchain_community.vectorstores.lancedb": "LanceDB", + "langchain_community.vectorstores.lantern": "Lantern", "langchain_community.vectorstores.llm_rails": "LLMRails", "langchain_community.vectorstores.marqo": "Marqo", 
"langchain_community.vectorstores.matching_engine": "MatchingEngine", @@ -256,6 +257,7 @@ def wrap_similarity_search(wrapped, instance, args, kwargs): def instrument_langchain_vectorstore_similarity_search(module): + print(module.__name__) vector_class = VECTORSTORE_CLASSES.get(module.__name__) if vector_class and hasattr(getattr(module, vector_class, ""), "similarity_search"): diff --git a/newrelic/hooks/mlmodel_openai.py b/newrelic/hooks/mlmodel_openai.py index 7ea277e766..35458131f1 100644 --- a/newrelic/hooks/mlmodel_openai.py +++ b/newrelic/hooks/mlmodel_openai.py @@ -113,7 +113,7 @@ def wrap_embedding_sync(wrapped, instance, args, kwargs): if not response: return response - response_headers = getattr(response, "_nr_response_headers", None) + response_headers = getattr(response, "_nr_response_headers", {}) # In v1, response objects are pydantic models so this function call converts the object back to a dictionary for backwards compatibility # Use standard response object returned from create call for v0 @@ -283,7 +283,7 @@ def wrap_chat_completion_sync(wrapped, instance, args, kwargs): return return_val # At this point, we have a response so we can grab attributes only available on the response object - response_headers = getattr(return_val, "_nr_response_headers", None) + response_headers = getattr(return_val, "_nr_response_headers", {}) # In v1, response objects are pydantic models so this function call converts the # object back to a dictionary for backwards compatibility. 
response = return_val @@ -570,7 +570,7 @@ async def wrap_embedding_async(wrapped, instance, args, kwargs): if not response: return response - response_headers = getattr(response, "_nr_response_headers", None) + response_headers = getattr(response, "_nr_response_headers", {}) # In v1, response objects are pydantic models so this function call converts the object back to a dictionary for backwards compatibility # Use standard response object returned from create call for v0 @@ -859,7 +859,7 @@ def bind_base_client_process_response( return response -def wrap_base_client_process_response(wrapped, instance, args, kwargs): +def wrap_base_client_process_response_sync(wrapped, instance, args, kwargs): response = bind_base_client_process_response(*args, **kwargs) nr_response_headers = getattr(response, "headers") @@ -869,6 +869,16 @@ def wrap_base_client_process_response(wrapped, instance, args, kwargs): return return_val +async def wrap_base_client_process_response_async(wrapped, instance, args, kwargs): + response = bind_base_client_process_response(*args, **kwargs) + nr_response_headers = getattr(response, "headers") + + return_val = await wrapped(*args, **kwargs) + # Obtain response headers for v1 + return_val._nr_response_headers = nr_response_headers + return return_val + + def instrument_openai_util(module): wrap_function_wrapper(module, "convert_to_openai_object", wrap_convert_to_openai_object) @@ -907,4 +917,9 @@ def instrument_openai_resources_embeddings(module): def instrument_openai_base_client(module): if hasattr(module.BaseClient, "_process_response"): - wrap_function_wrapper(module, "BaseClient._process_response", wrap_base_client_process_response) + wrap_function_wrapper(module, "BaseClient._process_response", wrap_base_client_process_response_sync) + else: + if hasattr(module.SyncAPIClient, "_process_response"): + wrap_function_wrapper(module, "SyncAPIClient._process_response", wrap_base_client_process_response_sync) + if hasattr(module.AsyncAPIClient,
"_process_response"): + wrap_function_wrapper(module, "AsyncAPIClient._process_response", wrap_base_client_process_response_async) diff --git a/newrelic/hooks/template_genshi.py b/newrelic/hooks/template_genshi.py index abea1e485a..46e19e36ca 100644 --- a/newrelic/hooks/template_genshi.py +++ b/newrelic/hooks/template_genshi.py @@ -12,33 +12,40 @@ # See the License for the specific language governing permissions and # limitations under the License. -import types - -import newrelic.api.transaction -import newrelic.api.object_wrapper import newrelic.api.function_trace +import newrelic.api.transaction +import newrelic.common.object_wrapper + class stream_wrapper(object): def __init__(self, stream, filepath): self.__stream = stream self.__filepath = filepath + def render(self, *args, **kwargs): return newrelic.api.function_trace.FunctionTraceWrapper( - self.__stream.render, self.__filepath, - 'Template/Render')(*args, **kwargs) + self.__stream.render, self.__filepath, "Template/Render" + )(*args, **kwargs) + def __getattr__(self, name): return getattr(self.__stream, name) + def __iter__(self): return iter(self.__stream) + def __or__(self, function): return self.__stream.__or__(function) + def __str__(self): return self.__stream.__str__() + def __unicode__(self): return self.__stream.__unicode__() + def __html__(self): return self.__stream.__html__() + class wrap_template(object): def __init__(self, wrapped): if isinstance(wrapped, tuple): @@ -57,17 +64,14 @@ def __get__(self, instance, klass): def __call__(self, *args, **kwargs): current_transaction = newrelic.api.transaction.current_transaction() if current_transaction and self.__instance: - return stream_wrapper(self.__wrapped(*args, **kwargs), - self.__instance.filepath) + return stream_wrapper(self.__wrapped(*args, **kwargs), self.__instance.filepath) else: return self.__wrapped(*args, **kwargs) def __getattr__(self, name): return getattr(self.__wrapped, name) -def instrument(module): - - if module.__name__ == 
'genshi.template.base': - newrelic.api.object_wrapper.wrap_object( - module, 'Template.generate', wrap_template) +def instrument(module): + if module.__name__ == "genshi.template.base": + newrelic.common.object_wrapper.wrap_object(module, "Template.generate", wrap_template) diff --git a/newrelic/hooks/template_mako.py b/newrelic/hooks/template_mako.py index 2e20da7306..1cd5bab16f 100644 --- a/newrelic/hooks/template_mako.py +++ b/newrelic/hooks/template_mako.py @@ -13,7 +13,7 @@ # limitations under the License. import newrelic.api.function_trace -import newrelic.api.object_wrapper +import newrelic.common.object_wrapper class TemplateRenderWrapper(object): @@ -42,7 +42,7 @@ def __call__(self, template, *args, **kwargs): def instrument_mako_runtime(module): - newrelic.api.object_wrapper.wrap_object(module, + newrelic.common.object_wrapper.wrap_object(module, '_render', TemplateRenderWrapper) def instrument_mako_template(module): diff --git a/newrelic/packages/wrapt/__init__.py b/newrelic/packages/wrapt/__init__.py index ee6539b774..ed31a94313 100644 --- a/newrelic/packages/wrapt/__init__.py +++ b/newrelic/packages/wrapt/__init__.py @@ -1,12 +1,15 @@ -__version_info__ = ('1', '14', '1') +__version_info__ = ('1', '16', '0') __version__ = '.'.join(__version_info__) -from .wrappers import (ObjectProxy, CallableObjectProxy, FunctionWrapper, - BoundFunctionWrapper, WeakFunctionProxy, PartialCallableObjectProxy, - resolve_path, apply_patch, wrap_object, wrap_object_attribute, +from .__wrapt__ import (ObjectProxy, CallableObjectProxy, FunctionWrapper, + BoundFunctionWrapper, PartialCallableObjectProxy) + +from .patches import (resolve_path, apply_patch, wrap_object, wrap_object_attribute, function_wrapper, wrap_function_wrapper, patch_function_wrapper, transient_function_wrapper) +from .weakrefs import WeakFunctionProxy + from .decorators import (adapter_factory, AdapterFactory, decorator, synchronized) diff --git a/newrelic/packages/wrapt/__wrapt__.py 
b/newrelic/packages/wrapt/__wrapt__.py new file mode 100644 index 0000000000..9933b2c972 --- /dev/null +++ b/newrelic/packages/wrapt/__wrapt__.py @@ -0,0 +1,14 @@ +import os + +from .wrappers import (ObjectProxy, CallableObjectProxy, + PartialCallableObjectProxy, FunctionWrapper, + BoundFunctionWrapper, _FunctionWrapperBase) + +try: + if not os.environ.get('WRAPT_DISABLE_EXTENSIONS'): + from ._wrappers import (ObjectProxy, CallableObjectProxy, + PartialCallableObjectProxy, FunctionWrapper, + BoundFunctionWrapper, _FunctionWrapperBase) + +except ImportError: + pass diff --git a/newrelic/packages/wrapt/_wrappers.c b/newrelic/packages/wrapt/_wrappers.c index 67c5d5e1af..e0e1b5bc65 100644 --- a/newrelic/packages/wrapt/_wrappers.c +++ b/newrelic/packages/wrapt/_wrappers.c @@ -1139,6 +1139,30 @@ static int WraptObjectProxy_setitem(WraptObjectProxyObject *self, /* ------------------------------------------------------------------------- */ +static PyObject *WraptObjectProxy_self_setattr( + WraptObjectProxyObject *self, PyObject *args) +{ + PyObject *name = NULL; + PyObject *value = NULL; + +#if PY_MAJOR_VERSION >= 3 + if (!PyArg_ParseTuple(args, "UO:__self_setattr__", &name, &value)) + return NULL; +#else + if (!PyArg_ParseTuple(args, "SO:__self_setattr__", &name, &value)) + return NULL; +#endif + + if (PyObject_GenericSetAttr((PyObject *)self, name, value) != 0) { + return NULL; + } + + Py_INCREF(Py_None); + return Py_None; +} + +/* ------------------------------------------------------------------------- */ + static PyObject *WraptObjectProxy_dir( WraptObjectProxyObject *self, PyObject *args) { @@ -1464,6 +1488,19 @@ static PyObject *WraptObjectProxy_get_class( /* ------------------------------------------------------------------------- */ +static int WraptObjectProxy_set_class(WraptObjectProxyObject *self, + PyObject *value) +{ + if (!self->wrapped) { + PyErr_SetString(PyExc_ValueError, "wrapper has not been initialized"); + return -1; + } + + return 
PyObject_SetAttrString(self->wrapped, "__class__", value); +} + +/* ------------------------------------------------------------------------- */ + static PyObject *WraptObjectProxy_get_annotations( WraptObjectProxyObject *self) { @@ -1535,6 +1572,9 @@ static PyObject *WraptObjectProxy_getattro( if (object) return object; + if (!PyErr_ExceptionMatches(PyExc_AttributeError)) + return NULL; + PyErr_Clear(); if (!getattr_str) { @@ -1738,6 +1778,8 @@ static PyMappingMethods WraptObjectProxy_as_mapping = { }; static PyMethodDef WraptObjectProxy_methods[] = { + { "__self_setattr__", (PyCFunction)WraptObjectProxy_self_setattr, + METH_VARARGS , 0 }, { "__dir__", (PyCFunction)WraptObjectProxy_dir, METH_NOARGS, 0 }, { "__enter__", (PyCFunction)WraptObjectProxy_enter, METH_VARARGS | METH_KEYWORDS, 0 }, @@ -1776,7 +1818,7 @@ static PyGetSetDef WraptObjectProxy_getset[] = { { "__doc__", (getter)WraptObjectProxy_get_doc, (setter)WraptObjectProxy_set_doc, 0 }, { "__class__", (getter)WraptObjectProxy_get_class, - NULL, 0 }, + (setter)WraptObjectProxy_set_class, 0 }, { "__annotations__", (getter)WraptObjectProxy_get_annotations, (setter)WraptObjectProxy_set_annotations, 0 }, { "__wrapped__", (getter)WraptObjectProxy_get_wrapped, @@ -2547,7 +2589,6 @@ static PyObject *WraptFunctionWrapperBase_set_name( static PyObject *WraptFunctionWrapperBase_instancecheck( WraptFunctionWrapperObject *self, PyObject *instance) { - PyObject *object = NULL; PyObject *result = NULL; int check = 0; diff --git a/newrelic/packages/wrapt/decorators.py b/newrelic/packages/wrapt/decorators.py index c3f2547295..c80a4bb72e 100644 --- a/newrelic/packages/wrapt/decorators.py +++ b/newrelic/packages/wrapt/decorators.py @@ -41,7 +41,7 @@ def exec_(_code_, _globs_=None, _locs_=None): except ImportError: pass -from .wrappers import (FunctionWrapper, BoundFunctionWrapper, ObjectProxy, +from .__wrapt__ import (FunctionWrapper, BoundFunctionWrapper, ObjectProxy, CallableObjectProxy) # Adapter wrapper for the wrapped 
function which will overlay certain diff --git a/newrelic/packages/wrapt/importer.py b/newrelic/packages/wrapt/importer.py index 5c4d4cc663..23fcbd2f63 100644 --- a/newrelic/packages/wrapt/importer.py +++ b/newrelic/packages/wrapt/importer.py @@ -15,7 +15,7 @@ string_types = str, from importlib.util import find_spec -from .decorators import synchronized +from .__wrapt__ import ObjectProxy # The dictionary registering any post import hooks to be triggered once # the target module has been imported. Once a module has been imported @@ -45,7 +45,6 @@ def import_hook(module): return callback(module) return import_hook -@synchronized(_post_import_hooks_lock) def register_post_import_hook(hook, name): # Create a deferred import hook if hook is a string name rather than # a callable function. @@ -53,51 +52,32 @@ def register_post_import_hook(hook, name): if isinstance(hook, string_types): hook = _create_import_hook_from_string(hook) - # Automatically install the import hook finder if it has not already - # been installed. + with _post_import_hooks_lock: + # Automatically install the import hook finder if it has not already + # been installed. - global _post_import_hooks_init + global _post_import_hooks_init - if not _post_import_hooks_init: - _post_import_hooks_init = True - sys.meta_path.insert(0, ImportHookFinder()) + if not _post_import_hooks_init: + _post_import_hooks_init = True + sys.meta_path.insert(0, ImportHookFinder()) - # Determine if any prior registration of a post import hook for - # the target modules has occurred and act appropriately. - - hooks = _post_import_hooks.get(name, None) - - if hooks is None: - # No prior registration of post import hooks for the target - # module. We need to check whether the module has already been - # imported. If it has we fire the hook immediately and add an - # empty list to the registry to indicate that the module has - # already been imported and hooks have fired. Otherwise add - # the post import hook to the registry. 
+ # Check if the module is already imported. If not, register the hook + # to be called after import. module = sys.modules.get(name, None) - if module is not None: - _post_import_hooks[name] = [] - hook(module) - - else: - _post_import_hooks[name] = [hook] + if module is None: + _post_import_hooks.setdefault(name, []).append(hook) - elif hooks == []: - # A prior registration of port import hooks for the target - # module was done and the hooks already fired. Fire the hook - # immediately. + # If the module is already imported, we fire the hook right away. Note that + # the hook is called outside of the lock to avoid deadlocks if code run as a + # consequence of calling the module import hook in turn triggers a separate + # thread which tries to register an import hook. - module = sys.modules[name] + if module is not None: hook(module) - else: - # A prior registration of port import hooks for the target - # module was done but the module has not yet been imported. - - _post_import_hooks[name].append(hook) - # Register post import hooks defined as package entry points. def _create_import_hook_from_entrypoint(entrypoint): @@ -124,16 +104,18 @@ def discover_post_import_hooks(group): # exception is raised in any of the post import hooks, that will cause # the import of the target module to fail. -@synchronized(_post_import_hooks_lock) def notify_module_loaded(module): name = getattr(module, '__name__', None) - hooks = _post_import_hooks.get(name, None) - if hooks: - _post_import_hooks[name] = [] + with _post_import_hooks_lock: + hooks = _post_import_hooks.pop(name, ()) - for hook in hooks: - hook(module) + # Note that the hook is called outside of the lock to avoid deadlocks if + # code run as a consequence of calling the module import hook in turn + # triggers a separate thread which tries to register an import hook. + + for hook in hooks: + hook(module) # A custom module import finder. 
This intercepts attempts to import # modules and watches out for attempts to import target modules of @@ -148,20 +130,45 @@ def load_module(self, fullname): return module -class _ImportHookChainedLoader: +class _ImportHookChainedLoader(ObjectProxy): def __init__(self, loader): - self.loader = loader + super(_ImportHookChainedLoader, self).__init__(loader) if hasattr(loader, "load_module"): - self.load_module = self._load_module + self.__self_setattr__('load_module', self._self_load_module) if hasattr(loader, "create_module"): - self.create_module = self._create_module + self.__self_setattr__('create_module', self._self_create_module) if hasattr(loader, "exec_module"): - self.exec_module = self._exec_module - - def _load_module(self, fullname): - module = self.loader.load_module(fullname) + self.__self_setattr__('exec_module', self._self_exec_module) + + def _self_set_loader(self, module): + # Set module's loader to self.__wrapped__ unless it's already set to + # something else. Import machinery will set it to spec.loader if it is + # None, so handle None as well. The module may not support attribute + # assignment, in which case we simply skip it. Note that we also deal + # with __loader__ not existing at all. This is to future-proof things + # due to the proposal to remove the attribute as described in the GitHub + # issue at https://github.com/python/cpython/issues/77458. Also prior + # to Python 3.3, the __loader__ attribute was only set if a custom + # module loader was used. It isn't clear whether the attribute still + # existed in that case or was set to None.
+ + class UNDEFINED: pass + + if getattr(module, "__loader__", UNDEFINED) in (None, self): + try: + module.__loader__ = self.__wrapped__ + except AttributeError: + pass + + if (getattr(module, "__spec__", None) is not None + and getattr(module.__spec__, "loader", None) is self): + module.__spec__.loader = self.__wrapped__ + + def _self_load_module(self, fullname): + module = self.__wrapped__.load_module(fullname) + self._self_set_loader(module) notify_module_loaded(module) return module @@ -169,11 +176,12 @@ def _load_module(self, fullname): # Python 3.4 introduced create_module() and exec_module() instead of # load_module() alone. Splitting the two steps. - def _create_module(self, spec): - return self.loader.create_module(spec) + def _self_create_module(self, spec): + return self.__wrapped__.create_module(spec) - def _exec_module(self, module): - self.loader.exec_module(module) + def _self_exec_module(self, module): + self._self_set_loader(module) + self.__wrapped__.exec_module(module) notify_module_loaded(module) class ImportHookFinder: @@ -181,14 +189,14 @@ class ImportHookFinder: def __init__(self): self.in_progress = {} - @synchronized(_post_import_hooks_lock) def find_module(self, fullname, path=None): # If the module being imported is not one we have registered # post import hooks for, we can return immediately. We will # take no further part in the importing of this module. - if not fullname in _post_import_hooks: - return None + with _post_import_hooks_lock: + if fullname not in _post_import_hooks: + return None # When we are interested in a specific module, we will call back # into the import system a second time to defer to the import @@ -244,8 +252,9 @@ def find_spec(self, fullname, path=None, target=None): # post import hooks for, we can return immediately. We will # take no further part in the importing of this module. 
- if not fullname in _post_import_hooks: - return None + with _post_import_hooks_lock: + if fullname not in _post_import_hooks: + return None # When we are interested in a specific module, we will call back # into the import system a second time to defer to the import diff --git a/newrelic/packages/wrapt/patches.py b/newrelic/packages/wrapt/patches.py new file mode 100644 index 0000000000..e22adf7ca8 --- /dev/null +++ b/newrelic/packages/wrapt/patches.py @@ -0,0 +1,141 @@ +import inspect +import sys + +PY2 = sys.version_info[0] == 2 + +if PY2: + string_types = basestring, +else: + string_types = str, + +from .__wrapt__ import FunctionWrapper + +# Helper functions for applying wrappers to existing functions. + +def resolve_path(module, name): + if isinstance(module, string_types): + __import__(module) + module = sys.modules[module] + + parent = module + + path = name.split('.') + attribute = path[0] + + # We can't just always use getattr() because in doing + # that on a class it will cause binding to occur which + # will complicate things later and cause some things not + # to work. For the case of a class we therefore access + # the __dict__ directly. To cope though with the wrong + # class being given to us, or a method being moved into + # a base class, we need to walk the class hierarchy to + # work out exactly which __dict__ the method was defined + # in, as accessing it from __dict__ will fail if it was + # not actually on the class given. Fallback to using + # getattr() if we can't find it. If it truly doesn't + # exist, then that will fail. 
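The `__dict__`-based class-hierarchy walk this comment describes can be demonstrated with a self-contained sketch (the classes below are invented for illustration; the helper mirrors, rather than reproduces, the one the patch adds):

```python
import inspect

def lookup_attribute(parent, attribute):
    # For classes, search each class in the MRO's own __dict__ so the
    # attribute is found where it was actually defined, without the
    # descriptor binding that plain getattr() on the class would perform.
    if inspect.isclass(parent):
        for cls in inspect.getmro(parent):
            if attribute in vars(cls):
                return vars(cls)[attribute]
    return getattr(parent, attribute)

class Base:
    def method(self):
        return "base"

class Child(Base):
    pass

# The attribute lives on Base, not Child; the MRO walk still finds it
# and returns the raw function object from Base.__dict__.
found = lookup_attribute(Child, "method")
```

This is why the comment insists on walking the hierarchy: asking `vars(Child)` alone would raise, while the MRO walk locates the defining class even after a method has been moved into a base class.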
+ + def lookup_attribute(parent, attribute): + if inspect.isclass(parent): + for cls in inspect.getmro(parent): + if attribute in vars(cls): + return vars(cls)[attribute] + else: + return getattr(parent, attribute) + else: + return getattr(parent, attribute) + + original = lookup_attribute(parent, attribute) + + for attribute in path[1:]: + parent = original + original = lookup_attribute(parent, attribute) + + return (parent, attribute, original) + +def apply_patch(parent, attribute, replacement): + setattr(parent, attribute, replacement) + +def wrap_object(module, name, factory, args=(), kwargs={}): + (parent, attribute, original) = resolve_path(module, name) + wrapper = factory(original, *args, **kwargs) + apply_patch(parent, attribute, wrapper) + return wrapper + +# Function for applying a proxy object to an attribute of a class +# instance. The wrapper works by defining an attribute of the same name +# on the class which is a descriptor and which intercepts access to the +# instance attribute. Note that this cannot be used on attributes which +# are themselves defined by a property object. 
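A standalone sketch of the descriptor technique described above (class names are invented): a class-level data descriptor shadows the instance attribute of the same name, so reads are intercepted and wrapped while writes fall through to the instance `__dict__`.

```python
class RecordingProxy:
    # Toy wrapper produced by the factory: remembers every value it wraps.
    log = []

    def __init__(self, value):
        self.value = value
        RecordingProxy.log.append(value)

class InterceptedAttribute:
    # Data descriptor installed on the class under the attribute's name.
    # Because it defines __set__, it takes precedence over the instance
    # __dict__ entry, so every read goes through factory().
    def __init__(self, attribute, factory):
        self.attribute = attribute
        self.factory = factory

    def __get__(self, instance, owner):
        return self.factory(instance.__dict__[self.attribute])

    def __set__(self, instance, value):
        instance.__dict__[self.attribute] = value

    def __delete__(self, instance):
        del instance.__dict__[self.attribute]

class Target:
    name = InterceptedAttribute("name", RecordingProxy)

t = Target()
t.name = "widget"   # stored raw in t.__dict__
wrapped = t.name    # read comes back wrapped in RecordingProxy
```

This also shows why the comment rules out attributes that are themselves properties: a property is already a data descriptor on the class, so there is no plain instance attribute left to shadow.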
+ +class AttributeWrapper(object): + + def __init__(self, attribute, factory, args, kwargs): + self.attribute = attribute + self.factory = factory + self.args = args + self.kwargs = kwargs + + def __get__(self, instance, owner): + value = instance.__dict__[self.attribute] + return self.factory(value, *self.args, **self.kwargs) + + def __set__(self, instance, value): + instance.__dict__[self.attribute] = value + + def __delete__(self, instance): + del instance.__dict__[self.attribute] + +def wrap_object_attribute(module, name, factory, args=(), kwargs={}): + path, attribute = name.rsplit('.', 1) + parent = resolve_path(module, path)[2] + wrapper = AttributeWrapper(attribute, factory, args, kwargs) + apply_patch(parent, attribute, wrapper) + return wrapper + +# Functions for creating a simple decorator using a FunctionWrapper, +# plus short cut functions for applying wrappers to functions. These are +# for use when doing monkey patching. For a more featured way of +# creating decorators see the decorator decorator instead. 
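A usage sketch of the `(wrapped, instance, args, kwargs)` decorator convention these helpers implement. The miniature `function_wrapper` below is a deliberately simplified stand-in: it ignores the bound- and class-method rebinding the real helper performs, and exists only to show the calling pattern from the wrapper author's side.

```python
# Simplified stand-in for the helper: real implementations also rebind
# `wrapper` via __get__ for instance and class methods, omitted here.
def function_wrapper(wrapper):
    def decorator(wrapped):
        def proxy(*args, **kwargs):
            return wrapper(wrapped, None, args, kwargs)
        return proxy
    return decorator

calls = []

@function_wrapper
def traced(wrapped, instance, args, kwargs):
    calls.append(wrapped.__name__)  # observe the call, then delegate
    return wrapped(*args, **kwargs)

@traced
def add(a, b):
    return a + b

total = add(2, 3)
```

The payoff of the convention is that one wrapper body (`traced`) works unchanged whether it is applied as a decorator or monkey-patched onto an existing attribute with `wrap_object`.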
+ +def function_wrapper(wrapper): + def _wrapper(wrapped, instance, args, kwargs): + target_wrapped = args[0] + if instance is None: + target_wrapper = wrapper + elif inspect.isclass(instance): + target_wrapper = wrapper.__get__(None, instance) + else: + target_wrapper = wrapper.__get__(instance, type(instance)) + return FunctionWrapper(target_wrapped, target_wrapper) + return FunctionWrapper(wrapper, _wrapper) + +def wrap_function_wrapper(module, name, wrapper): + return wrap_object(module, name, FunctionWrapper, (wrapper,)) + +def patch_function_wrapper(module, name, enabled=None): + def _wrapper(wrapper): + return wrap_object(module, name, FunctionWrapper, (wrapper, enabled)) + return _wrapper + +def transient_function_wrapper(module, name): + def _decorator(wrapper): + def _wrapper(wrapped, instance, args, kwargs): + target_wrapped = args[0] + if instance is None: + target_wrapper = wrapper + elif inspect.isclass(instance): + target_wrapper = wrapper.__get__(None, instance) + else: + target_wrapper = wrapper.__get__(instance, type(instance)) + def _execute(wrapped, instance, args, kwargs): + (parent, attribute, original) = resolve_path(module, name) + replacement = FunctionWrapper(original, target_wrapper) + setattr(parent, attribute, replacement) + try: + return wrapped(*args, **kwargs) + finally: + setattr(parent, attribute, original) + return FunctionWrapper(target_wrapped, _execute) + return FunctionWrapper(wrapper, _wrapper) + return _decorator diff --git a/newrelic/packages/wrapt/weakrefs.py b/newrelic/packages/wrapt/weakrefs.py new file mode 100644 index 0000000000..f931b60d5f --- /dev/null +++ b/newrelic/packages/wrapt/weakrefs.py @@ -0,0 +1,98 @@ +import functools +import weakref + +from .__wrapt__ import ObjectProxy, _FunctionWrapperBase + +# A weak function proxy. This will work on instance methods, class +# methods, static methods and regular functions. 
Special treatment is +# needed for the method types because the bound method is effectively a +# transient object and applying a weak reference to one will immediately +# result in it being destroyed and the weakref callback called. The weak +# reference is therefore applied to the instance the method is bound to +# and the original function. The function is then rebound at the point +# of a call via the weak function proxy. + +def _weak_function_proxy_callback(ref, proxy, callback): + if proxy._self_expired: + return + + proxy._self_expired = True + + # This could raise an exception. We let it propagate back and let + # the weakref.proxy() deal with it, at which point it generally + # prints out a short error message direct to stderr and keeps going. + + if callback is not None: + callback(proxy) + +class WeakFunctionProxy(ObjectProxy): + + __slots__ = ('_self_expired', '_self_instance') + + def __init__(self, wrapped, callback=None): + # We need to determine if the wrapped function is actually a + # bound method. In the case of a bound method, we need to keep a + # reference to the original unbound function and the instance. + # This is necessary because if we hold a reference to the bound + # function, it will be the only reference and given it is a + # temporary object, it will almost immediately expire and + # the weakref callback triggered. So what is done is that we + # hold a reference to the instance and unbound function and + # when called bind the function to the instance once again and + # then call it. Note that we avoid using a nested function for + # the callback here so as not to cause any odd reference cycles. 
+ + _callback = callback and functools.partial( + _weak_function_proxy_callback, proxy=self, + callback=callback) + + self._self_expired = False + + if isinstance(wrapped, _FunctionWrapperBase): + self._self_instance = weakref.ref(wrapped._self_instance, + _callback) + + if wrapped._self_parent is not None: + super(WeakFunctionProxy, self).__init__( + weakref.proxy(wrapped._self_parent, _callback)) + + else: + super(WeakFunctionProxy, self).__init__( + weakref.proxy(wrapped, _callback)) + + return + + try: + self._self_instance = weakref.ref(wrapped.__self__, _callback) + + super(WeakFunctionProxy, self).__init__( + weakref.proxy(wrapped.__func__, _callback)) + + except AttributeError: + self._self_instance = None + + super(WeakFunctionProxy, self).__init__( + weakref.proxy(wrapped, _callback)) + + def __call__(*args, **kwargs): + def _unpack_self(self, *args): + return self, args + + self, args = _unpack_self(*args) + + # We perform a boolean check here on the instance and wrapped + # function as that will trigger the reference error prior to + # calling if the reference had expired. + + instance = self._self_instance and self._self_instance() + function = self.__wrapped__ and self.__wrapped__ + + # If the wrapped function was originally a bound function, for + # which we retained a reference to the instance and the unbound + # function we need to rebind the function and then call it. If + # not just called the wrapped function. 
+ + if instance is None: + return self.__wrapped__(*args, **kwargs) + + return function.__get__(instance, type(instance))(*args, **kwargs) diff --git a/newrelic/packages/wrapt/wrappers.py b/newrelic/packages/wrapt/wrappers.py index 2716cd1da1..dfc3440db4 100644 --- a/newrelic/packages/wrapt/wrappers.py +++ b/newrelic/packages/wrapt/wrappers.py @@ -1,8 +1,5 @@ -import os import sys -import functools import operator -import weakref import inspect PY2 = sys.version_info[0] == 2 @@ -94,6 +91,9 @@ def __init__(self, wrapped): except AttributeError: pass + def __self_setattr__(self, name, value): + object.__setattr__(self, name, value) + @property def __name__(self): return self.__wrapped__.__name__ @@ -445,12 +445,22 @@ def __reduce_ex__(self, protocol): class CallableObjectProxy(ObjectProxy): - def __call__(self, *args, **kwargs): + def __call__(*args, **kwargs): + def _unpack_self(self, *args): + return self, args + + self, args = _unpack_self(*args) + return self.__wrapped__(*args, **kwargs) class PartialCallableObjectProxy(ObjectProxy): - def __init__(self, *args, **kwargs): + def __init__(*args, **kwargs): + def _unpack_self(self, *args): + return self, args + + self, args = _unpack_self(*args) + if len(args) < 1: raise TypeError('partial type takes at least one argument') @@ -464,7 +474,12 @@ def __init__(self, *args, **kwargs): self._self_args = args self._self_kwargs = kwargs - def __call__(self, *args, **kwargs): + def __call__(*args, **kwargs): + def _unpack_self(self, *args): + return self, args + + self, args = _unpack_self(*args) + _args = self._self_args + args _kwargs = dict(self._self_kwargs) @@ -544,7 +559,12 @@ def __get__(self, instance, owner): return self - def __call__(self, *args, **kwargs): + def __call__(*args, **kwargs): + def _unpack_self(self, *args): + return self, args + + self, args = _unpack_self(*args) + # If enabled has been specified, then evaluate it at this point # and if the wrapper is not to be executed, then simply return # the 
bound function rather than a bound wrapper for the bound @@ -607,7 +627,12 @@ def __subclasscheck__(self, subclass): class BoundFunctionWrapper(_FunctionWrapperBase): - def __call__(self, *args, **kwargs): + def __call__(*args, **kwargs): + def _unpack_self(self, *args): + return self, args + + self, args = _unpack_self(*args) + # If enabled has been specified, then evaluate it at this point # and if the wrapper is not to be executed, then simply return # the bound function rather than a bound wrapper for the bound @@ -757,230 +782,3 @@ def __init__(self, wrapped, wrapper, enabled=None): super(FunctionWrapper, self).__init__(wrapped, None, wrapper, enabled, binding) - -try: - if not os.environ.get('WRAPT_DISABLE_EXTENSIONS'): - from ._wrappers import (ObjectProxy, CallableObjectProxy, - PartialCallableObjectProxy, FunctionWrapper, - BoundFunctionWrapper, _FunctionWrapperBase) -except ImportError: - pass - -# Helper functions for applying wrappers to existing functions. - -def resolve_path(module, name): - if isinstance(module, string_types): - __import__(module) - module = sys.modules[module] - - parent = module - - path = name.split('.') - attribute = path[0] - - # We can't just always use getattr() because in doing - # that on a class it will cause binding to occur which - # will complicate things later and cause some things not - # to work. For the case of a class we therefore access - # the __dict__ directly. To cope though with the wrong - # class being given to us, or a method being moved into - # a base class, we need to walk the class hierarchy to - # work out exactly which __dict__ the method was defined - # in, as accessing it from __dict__ will fail if it was - # not actually on the class given. Fallback to using - # getattr() if we can't find it. If it truly doesn't - # exist, then that will fail. 
- - def lookup_attribute(parent, attribute): - if inspect.isclass(parent): - for cls in inspect.getmro(parent): - if attribute in vars(cls): - return vars(cls)[attribute] - else: - return getattr(parent, attribute) - else: - return getattr(parent, attribute) - - original = lookup_attribute(parent, attribute) - - for attribute in path[1:]: - parent = original - original = lookup_attribute(parent, attribute) - - return (parent, attribute, original) - -def apply_patch(parent, attribute, replacement): - setattr(parent, attribute, replacement) - -def wrap_object(module, name, factory, args=(), kwargs={}): - (parent, attribute, original) = resolve_path(module, name) - wrapper = factory(original, *args, **kwargs) - apply_patch(parent, attribute, wrapper) - return wrapper - -# Function for applying a proxy object to an attribute of a class -# instance. The wrapper works by defining an attribute of the same name -# on the class which is a descriptor and which intercepts access to the -# instance attribute. Note that this cannot be used on attributes which -# are themselves defined by a property object. 
- -class AttributeWrapper(object): - - def __init__(self, attribute, factory, args, kwargs): - self.attribute = attribute - self.factory = factory - self.args = args - self.kwargs = kwargs - - def __get__(self, instance, owner): - value = instance.__dict__[self.attribute] - return self.factory(value, *self.args, **self.kwargs) - - def __set__(self, instance, value): - instance.__dict__[self.attribute] = value - - def __delete__(self, instance): - del instance.__dict__[self.attribute] - -def wrap_object_attribute(module, name, factory, args=(), kwargs={}): - path, attribute = name.rsplit('.', 1) - parent = resolve_path(module, path)[2] - wrapper = AttributeWrapper(attribute, factory, args, kwargs) - apply_patch(parent, attribute, wrapper) - return wrapper - -# Functions for creating a simple decorator using a FunctionWrapper, -# plus short cut functions for applying wrappers to functions. These are -# for use when doing monkey patching. For a more featured way of -# creating decorators see the decorator decorator instead. 
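The monkey-patching helpers being relocated in this diff (`resolve_path`, `apply_patch`, `wrap_object`, `wrap_function_wrapper`) share one pattern: resolve a dotted name to its parent object, build a wrapper around the original, and patch the wrapper back in. A rough stdlib-only approximation, where `mini_wrap_function_wrapper`, `demo`, and `record_calls` are hypothetical names for illustration:

```python
import types

def mini_wrap_function_wrapper(module, name, wrapper):
    # Resolve the dotted path to its parent object, much like resolve_path.
    parent = module
    path = name.split(".")
    for attribute in path[:-1]:
        parent = getattr(parent, attribute)
    original = getattr(parent, path[-1])

    def patched(*args, **kwargs):
        # Mirror wrapt's (wrapped, instance, args, kwargs) convention;
        # instance is None for a plain module-level function.
        return wrapper(original, None, args, kwargs)

    # Patch the replacement back onto the parent, like apply_patch.
    setattr(parent, path[-1], patched)
    return patched


# Demo module with a function to instrument.
demo = types.ModuleType("demo")
demo.add = lambda a, b: a + b

calls = []

def record_calls(wrapped, instance, args, kwargs):
    calls.append((args, kwargs))
    return wrapped(*args, **kwargs)

mini_wrap_function_wrapper(demo, "add", record_calls)
print(demo.add(2, 3))  # 5
print(calls)           # [((2, 3), {})]
```

The real implementation differs in an important way: `FunctionWrapper` is a transparent object proxy that preserves introspection and method binding, which a plain closure like `patched` does not.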
- -def function_wrapper(wrapper): - def _wrapper(wrapped, instance, args, kwargs): - target_wrapped = args[0] - if instance is None: - target_wrapper = wrapper - elif inspect.isclass(instance): - target_wrapper = wrapper.__get__(None, instance) - else: - target_wrapper = wrapper.__get__(instance, type(instance)) - return FunctionWrapper(target_wrapped, target_wrapper) - return FunctionWrapper(wrapper, _wrapper) - -def wrap_function_wrapper(module, name, wrapper): - return wrap_object(module, name, FunctionWrapper, (wrapper,)) - -def patch_function_wrapper(module, name): - def _wrapper(wrapper): - return wrap_object(module, name, FunctionWrapper, (wrapper,)) - return _wrapper - -def transient_function_wrapper(module, name): - def _decorator(wrapper): - def _wrapper(wrapped, instance, args, kwargs): - target_wrapped = args[0] - if instance is None: - target_wrapper = wrapper - elif inspect.isclass(instance): - target_wrapper = wrapper.__get__(None, instance) - else: - target_wrapper = wrapper.__get__(instance, type(instance)) - def _execute(wrapped, instance, args, kwargs): - (parent, attribute, original) = resolve_path(module, name) - replacement = FunctionWrapper(original, target_wrapper) - setattr(parent, attribute, replacement) - try: - return wrapped(*args, **kwargs) - finally: - setattr(parent, attribute, original) - return FunctionWrapper(target_wrapped, _execute) - return FunctionWrapper(wrapper, _wrapper) - return _decorator - -# A weak function proxy. This will work on instance methods, class -# methods, static methods and regular functions. Special treatment is -# needed for the method types because the bound method is effectively a -# transient object and applying a weak reference to one will immediately -# result in it being destroyed and the weakref callback called. The weak -# reference is therefore applied to the instance the method is bound to -# and the original function. 
The function is then rebound at the point -# of a call via the weak function proxy. - -def _weak_function_proxy_callback(ref, proxy, callback): - if proxy._self_expired: - return - - proxy._self_expired = True - - # This could raise an exception. We let it propagate back and let - # the weakref.proxy() deal with it, at which point it generally - # prints out a short error message direct to stderr and keeps going. - - if callback is not None: - callback(proxy) - -class WeakFunctionProxy(ObjectProxy): - - __slots__ = ('_self_expired', '_self_instance') - - def __init__(self, wrapped, callback=None): - # We need to determine if the wrapped function is actually a - # bound method. In the case of a bound method, we need to keep a - # reference to the original unbound function and the instance. - # This is necessary because if we hold a reference to the bound - # function, it will be the only reference and given it is a - # temporary object, it will almost immediately expire and - # the weakref callback triggered. So what is done is that we - # hold a reference to the instance and unbound function and - # when called bind the function to the instance once again and - # then call it. Note that we avoid using a nested function for - # the callback here so as not to cause any odd reference cycles. 
- - _callback = callback and functools.partial( - _weak_function_proxy_callback, proxy=self, - callback=callback) - - self._self_expired = False - - if isinstance(wrapped, _FunctionWrapperBase): - self._self_instance = weakref.ref(wrapped._self_instance, - _callback) - - if wrapped._self_parent is not None: - super(WeakFunctionProxy, self).__init__( - weakref.proxy(wrapped._self_parent, _callback)) - - else: - super(WeakFunctionProxy, self).__init__( - weakref.proxy(wrapped, _callback)) - - return - - try: - self._self_instance = weakref.ref(wrapped.__self__, _callback) - - super(WeakFunctionProxy, self).__init__( - weakref.proxy(wrapped.__func__, _callback)) - - except AttributeError: - self._self_instance = None - - super(WeakFunctionProxy, self).__init__( - weakref.proxy(wrapped, _callback)) - - def __call__(self, *args, **kwargs): - # We perform a boolean check here on the instance and wrapped - # function as that will trigger the reference error prior to - # calling if the reference had expired. - - instance = self._self_instance and self._self_instance() - function = self.__wrapped__ and self.__wrapped__ - - # If the wrapped function was originally a bound function, for - # which we retained a reference to the instance and the unbound - # function we need to rebind the function and then call it. If - # not just called the wrapped function. 
- - if instance is None: - return self.__wrapped__(*args, **kwargs) - - return function.__get__(instance, type(instance))(*args, **kwargs) diff --git a/setup.cfg b/setup.cfg index 006265c364..8a41f1534d 100644 --- a/setup.cfg +++ b/setup.cfg @@ -5,4 +5,4 @@ license_files = [flake8] max-line-length = 120 -extend-ignore = E122,E126,E127,E128,E203,E501,E722,F841,W504,E731 +extend-ignore = E122,E126,E127,E128,E203,E501,E722,F841,W504,E731,F811 diff --git a/setup.py b/setup.py index ed8dbfb844..3a92d06a45 100644 --- a/setup.py +++ b/setup.py @@ -124,6 +124,7 @@ def build_extension(self, ext): "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", + "Programming Language :: Python :: 3.12", "Programming Language :: Python :: Implementation :: CPython", "Programming Language :: Python :: Implementation :: PyPy", "Topic :: System :: Monitoring", diff --git a/tests/adapter_hypercorn/test_hypercorn.py b/tests/adapter_hypercorn/test_hypercorn.py index 8b53eee0ac..262f7a0317 100644 --- a/tests/adapter_hypercorn/test_hypercorn.py +++ b/tests/adapter_hypercorn/test_hypercorn.py @@ -17,7 +17,6 @@ import time from urllib.request import HTTPError, urlopen -import pkg_resources import pytest from testing_support.fixtures import ( override_application_settings, @@ -39,8 +38,12 @@ from newrelic.api.transaction import ignore_transaction from newrelic.common.object_names import callable_name +from newrelic.common.package_version_utils import ( + get_package_version, + get_package_version_tuple, +) -HYPERCORN_VERSION = tuple(int(v) for v in pkg_resources.get_distribution("hypercorn").version.split(".")) +HYPERCORN_VERSION = get_package_version_tuple("hypercorn") asgi_2_unsupported = HYPERCORN_VERSION >= (0, 14, 1) wsgi_unsupported = HYPERCORN_VERSION < (0, 14, 1) @@ -60,6 +63,7 @@ def wsgi_app(environ, start_response): @pytest.fixture( + scope="session", params=( pytest.param( simple_app_v2_raw, @@ -78,7 +82,7 @@ def 
app(request): return request.param -@pytest.fixture() +@pytest.fixture(scope="session") def port(loop, app): import hypercorn.asyncio import hypercorn.config @@ -132,7 +136,7 @@ def wait_for_port(port, retries=10): @override_application_settings({"transaction_name.naming_scheme": "framework"}) def test_hypercorn_200(port, app): - hypercorn_version = pkg_resources.get_distribution("hypercorn").version + hypercorn_version = get_package_version("hypercorn") @validate_transaction_metrics( callable_name(app), diff --git a/tests/agent_features/test_asgi_browser.py b/tests/agent_features/test_asgi_browser.py index 281d08b967..4146d507b6 100644 --- a/tests/agent_features/test_asgi_browser.py +++ b/tests/agent_features/test_asgi_browser.py @@ -31,7 +31,6 @@ from newrelic.api.transaction import ( add_custom_attribute, disable_browser_autorum, - get_browser_timing_footer, get_browser_timing_header, ) from newrelic.common.encoding_utils import deobfuscate @@ -41,9 +40,9 @@ @asgi_application() async def target_asgi_application_manual_rum(scope, receive, send): - text = "%s

</head><body><p>RESPONSE</p>%s</body></html>"
+    text = "<html><head>%s</head><body><p>RESPONSE</p></body></html>
" - output = (text % (get_browser_timing_header(), get_browser_timing_footer())).encode("UTF-8") + output = (text % get_browser_timing_header()).encode("UTF-8") response_headers = [ (b"content-type", b"text/html; charset=utf-8"), @@ -56,15 +55,15 @@ async def target_asgi_application_manual_rum(scope, receive, send): target_application_manual_rum = AsgiTest(target_asgi_application_manual_rum) -_test_footer_attributes = { +_test_header_attributes = { "browser_monitoring.enabled": True, "browser_monitoring.auto_instrument": False, "js_agent_loader": "", } -@override_application_settings(_test_footer_attributes) -def test_footer_attributes(): +@override_application_settings(_test_header_attributes) +def test_header_attributes(): settings = application_settings() assert settings.browser_monitoring.enabled @@ -84,7 +83,6 @@ def test_footer_attributes(): html = BeautifulSoup(response.body, "html.parser") header = html.html.head.script.string content = html.html.body.p.string - footer = html.html.body.script.string # Validate actual body content. @@ -94,10 +92,10 @@ def test_footer_attributes(): assert header.find("NREUM HEADER") != -1 - # Now validate the various fields of the footer. The fields are + # Now validate the various fields of the header. The fields are # held by a JSON dictionary. 
- data = json.loads(footer.split("NREUM.info=")[1]) + data = json.loads(header.split("NREUM.info=")[1].split(";\n")[0]) assert data["licenseKey"] == settings.browser_key assert data["applicationID"] == settings.application_id @@ -137,8 +135,8 @@ def test_ssl_for_http_is_none(): response = target_application_manual_rum.get("/") html = BeautifulSoup(response.body, "html.parser") - footer = html.html.body.script.string - data = json.loads(footer.split("NREUM.info=")[1]) + header = html.html.head.script.string + data = json.loads(header.split("NREUM.info=")[1].split(";\n")[0]) assert "sslForHttp" not in data @@ -159,8 +157,8 @@ def test_ssl_for_http_is_true(): response = target_application_manual_rum.get("/") html = BeautifulSoup(response.body, "html.parser") - footer = html.html.body.script.string - data = json.loads(footer.split("NREUM.info=")[1]) + header = html.html.head.script.string + data = json.loads(header.split("NREUM.info=")[1].split(";\n")[0]) assert data["sslForHttp"] is True @@ -181,8 +179,8 @@ def test_ssl_for_http_is_false(): response = target_application_manual_rum.get("/") html = BeautifulSoup(response.body, "html.parser") - footer = html.html.body.script.string - data = json.loads(footer.split("NREUM.info=")[1]) + header = html.html.head.script.string + data = json.loads(header.split("NREUM.info=")[1].split(";\n")[0]) assert data["sslForHttp"] is False @@ -219,7 +217,7 @@ def test_html_insertion_yield_single_no_head(): # The 'NREUM HEADER' value comes from our override for the header. # The 'NREUM.info' value comes from the programmatically generated - # footer added by the agent. + # header added by the agent. assert b"NREUM HEADER" in response.body assert b"NREUM.info" in response.body @@ -259,7 +257,7 @@ def test_html_insertion_yield_multi_no_head(): # The 'NREUM HEADER' value comes from our override for the header. # The 'NREUM.info' value comes from the programmatically generated - # footer added by the agent. + # header added by the agent. 
assert b"NREUM HEADER" in response.body assert b"NREUM.info" in response.body @@ -299,7 +297,7 @@ def test_html_insertion_unnamed_attachment_header(): # The 'NREUM HEADER' value comes from our override for the header. # The 'NREUM.info' value comes from the programmatically generated - # footer added by the agent. + # header added by the agent. assert b"NREUM HEADER" not in response.body assert b"NREUM.info" not in response.body @@ -339,7 +337,7 @@ def test_html_insertion_named_attachment_header(): # The 'NREUM HEADER' value comes from our override for the header. # The 'NREUM.info' value comes from the programmatically generated - # footer added by the agent. + # header added by the agent. assert b"NREUM HEADER" not in response.body assert b"NREUM.info" not in response.body @@ -379,7 +377,7 @@ def test_html_insertion_inline_attachment_header(): # The 'NREUM HEADER' value comes from our override for the header. # The 'NREUM.info' value comes from the programmatically generated - # footer added by the agent. + # header added by the agent. assert b"NREUM HEADER" in response.body assert b"NREUM.info" in response.body @@ -414,7 +412,7 @@ def test_html_insertion_empty(): # The 'NREUM HEADER' value comes from our override for the header. # The 'NREUM.info' value comes from the programmatically generated - # footer added by the agent. + # header added by the agent. assert b"NREUM HEADER" not in response.body assert b"NREUM.info" not in response.body @@ -449,7 +447,7 @@ def test_html_insertion_single_empty_string(): # The 'NREUM HEADER' value comes from our override for the header. # The 'NREUM.info' value comes from the programmatically generated - # footer added by the agent. + # header added by the agent. assert b"NREUM HEADER" not in response.body assert b"NREUM.info" not in response.body @@ -485,7 +483,7 @@ def test_html_insertion_multiple_empty_string(): # The 'NREUM HEADER' value comes from our override for the header. 
# The 'NREUM.info' value comes from the programmatically generated - # footer added by the agent. + # header added by the agent. assert b"NREUM HEADER" not in response.body assert b"NREUM.info" not in response.body @@ -522,7 +520,7 @@ def test_html_insertion_single_large_prelude(): # The 'NREUM HEADER' value comes from our override for the header. # The 'NREUM.info' value comes from the programmatically generated - # footer added by the agent. + # header added by the agent. assert "content-type" in response.headers assert "content-length" in response.headers @@ -566,7 +564,7 @@ def test_html_insertion_multi_large_prelude(): # The 'NREUM HEADER' value comes from our override for the header. # The 'NREUM.info' value comes from the programmatically generated - # footer added by the agent. + # header added by the agent. assert "content-type" in response.headers assert "content-length" in response.headers @@ -884,7 +882,7 @@ def test_html_insertion_disable_autorum_via_api(): # The 'NREUM HEADER' value comes from our override for the header. # The 'NREUM.info' value comes from the programmatically generated - # footer added by the agent. + # header added by the agent. assert b"NREUM HEADER" not in response.body assert b"NREUM.info" not in response.body @@ -895,13 +893,9 @@ async def target_asgi_application_manual_rum_insertion(scope, receive, send): output = b"

<html><body><p>RESPONSE</p></body></html>
" header = get_browser_timing_header() - footer = get_browser_timing_footer() - header = get_browser_timing_header() - footer = get_browser_timing_footer() assert header == "" - assert footer == "" response_headers = [ (b"content-type", b"text/html; charset=utf-8"), @@ -931,7 +925,7 @@ def test_html_insertion_manual_rum_insertion(): # The 'NREUM HEADER' value comes from our override for the header. # The 'NREUM.info' value comes from the programmatically generated - # footer added by the agent. + # header added by the agent. assert b"NREUM HEADER" not in response.body assert b"NREUM.info" not in response.body diff --git a/tests/agent_features/test_browser.py b/tests/agent_features/test_browser.py index e0f562d1e8..84ce795000 100644 --- a/tests/agent_features/test_browser.py +++ b/tests/agent_features/test_browser.py @@ -13,6 +13,7 @@ # limitations under the License. import json +import re import sys import six @@ -29,9 +30,9 @@ from newrelic.api.transaction import ( add_custom_attribute, disable_browser_autorum, - get_browser_timing_footer, get_browser_timing_header, ) +from newrelic.api.web_transaction import web_transaction from newrelic.api.wsgi_application import wsgi_application from newrelic.common.encoding_utils import deobfuscate @@ -42,9 +43,9 @@ def target_wsgi_application_manual_rum(environ, start_response): status = "200 OK" - text = "%s

</head><body><p>RESPONSE</p>%s</body></html>"
+    text = "<html><head>%s</head><body><p>RESPONSE</p></body></html>
" - output = (text % (get_browser_timing_header(), get_browser_timing_footer())).encode("UTF-8") + output = (text % get_browser_timing_header()).encode("UTF-8") response_headers = [("Content-Type", "text/html; charset=utf-8"), ("Content-Length", str(len(output)))] start_response(status, response_headers) @@ -54,15 +55,15 @@ def target_wsgi_application_manual_rum(environ, start_response): target_application_manual_rum = webtest.TestApp(target_wsgi_application_manual_rum) -_test_footer_attributes = { +_test_header_attributes = { "browser_monitoring.enabled": True, "browser_monitoring.auto_instrument": False, "js_agent_loader": "", } -@override_application_settings(_test_footer_attributes) -def test_footer_attributes(): +@override_application_settings(_test_header_attributes) +def test_header_attributes(): settings = application_settings() assert settings.browser_monitoring.enabled @@ -81,7 +82,6 @@ def test_footer_attributes(): header = response.html.html.head.script.string content = response.html.html.body.p.string - footer = response.html.html.body.script.string # Validate actual body content. @@ -91,10 +91,10 @@ def test_footer_attributes(): assert header.find("NREUM HEADER") != -1 - # Now validate the various fields of the footer. The fields are + # Now validate the various fields of the header. The fields are # held by a JSON dictionary. 
- data = json.loads(footer.split("NREUM.info=")[1]) + data = json.loads(header.split("NREUM.info=")[1].split(";\n")[0]) assert data["licenseKey"] == settings.browser_key assert data["applicationID"] == settings.application_id @@ -133,8 +133,8 @@ def test_ssl_for_http_is_none(): assert settings.browser_monitoring.ssl_for_http is None response = target_application_manual_rum.get("/") - footer = response.html.html.body.script.string - data = json.loads(footer.split("NREUM.info=")[1]) + header = response.html.html.head.script.string + data = json.loads(header.split("NREUM.info=")[1].split(";\n")[0]) assert "sslForHttp" not in data @@ -154,8 +154,8 @@ def test_ssl_for_http_is_true(): assert settings.browser_monitoring.ssl_for_http is True response = target_application_manual_rum.get("/") - footer = response.html.html.body.script.string - data = json.loads(footer.split("NREUM.info=")[1]) + header = response.html.html.head.script.string + data = json.loads(header.split("NREUM.info=")[1].split(";\n")[0]) assert data["sslForHttp"] is True @@ -175,8 +175,8 @@ def test_ssl_for_http_is_false(): assert settings.browser_monitoring.ssl_for_http is False response = target_application_manual_rum.get("/") - footer = response.html.html.body.script.string - data = json.loads(footer.split("NREUM.info=")[1]) + header = response.html.html.head.script.string + data = json.loads(header.split("NREUM.info=")[1].split(";\n")[0]) assert data["sslForHttp"] is False @@ -211,7 +211,7 @@ def test_html_insertion_yield_single_no_head(): # The 'NREUM HEADER' value comes from our override for the header. # The 'NREUM.info' value comes from the programmatically generated - # footer added by the agent. + # header added by the agent. response.mustcontain("NREUM HEADER", "NREUM.info") @@ -247,7 +247,7 @@ def test_html_insertion_yield_multi_no_head(): # The 'NREUM HEADER' value comes from our override for the header. 
# The 'NREUM.info' value comes from the programmatically generated - # footer added by the agent. + # header added by the agent. response.mustcontain("NREUM HEADER", "NREUM.info") @@ -287,7 +287,7 @@ def test_html_insertion_unnamed_attachment_header(): # The 'NREUM HEADER' value comes from our override for the header. # The 'NREUM.info' value comes from the programmatically generated - # footer added by the agent. + # header added by the agent. response.mustcontain(no=["NREUM HEADER", "NREUM.info"]) @@ -327,7 +327,7 @@ def test_html_insertion_named_attachment_header(): # The 'NREUM HEADER' value comes from our override for the header. # The 'NREUM.info' value comes from the programmatically generated - # footer added by the agent. + # header added by the agent. response.mustcontain(no=["NREUM HEADER", "NREUM.info"]) @@ -367,7 +367,7 @@ def test_html_insertion_inline_attachment_header(): # The 'NREUM HEADER' value comes from our override for the header. # The 'NREUM.info' value comes from the programmatically generated - # footer added by the agent. + # header added by the agent. response.mustcontain("NREUM HEADER", "NREUM.info") @@ -400,7 +400,7 @@ def test_html_insertion_empty_list(): # The 'NREUM HEADER' value comes from our override for the header. # The 'NREUM.info' value comes from the programmatically generated - # footer added by the agent. + # header added by the agent. response.mustcontain(no=["NREUM HEADER", "NREUM.info"]) @@ -435,7 +435,7 @@ def test_html_insertion_single_empty_string(): # The 'NREUM HEADER' value comes from our override for the header. # The 'NREUM.info' value comes from the programmatically generated - # footer added by the agent. + # header added by the agent. response.mustcontain(no=["NREUM HEADER", "NREUM.info"]) @@ -470,7 +470,7 @@ def test_html_insertion_multiple_empty_string(): # The 'NREUM HEADER' value comes from our override for the header. 
# The 'NREUM.info' value comes from the programmatically generated - # footer added by the agent. + # header added by the agent. response.mustcontain(no=["NREUM HEADER", "NREUM.info"]) @@ -504,7 +504,7 @@ def test_html_insertion_single_large_prelude(): # The 'NREUM HEADER' value comes from our override for the header. # The 'NREUM.info' value comes from the programmatically generated - # footer added by the agent. + # header added by the agent. assert "Content-Type" in response.headers assert "Content-Length" in response.headers @@ -543,7 +543,7 @@ def test_html_insertion_multi_large_prelude(): # The 'NREUM HEADER' value comes from our override for the header. # The 'NREUM.info' value comes from the programmatically generated - # footer added by the agent. + # header added by the agent. assert "Content-Type" in response.headers assert "Content-Length" in response.headers @@ -588,7 +588,7 @@ def test_html_insertion_yield_before_start(): # The 'NREUM HEADER' value comes from our override for the header. # The 'NREUM.info' value comes from the programmatically generated - # footer added by the agent. + # header added by the agent. response.mustcontain("NREUM HEADER", "NREUM.info") @@ -626,7 +626,7 @@ def test_html_insertion_start_yield_start(): # The 'NREUM HEADER' value comes from our override for the header. # The 'NREUM.info' value comes from the programmatically generated - # footer added by the agent. + # header added by the agent. assert "Content-Type" in response.headers assert "Content-Length" in response.headers @@ -979,7 +979,7 @@ def test_html_insertion_disable_autorum_via_api(): # The 'NREUM HEADER' value comes from our override for the header. # The 'NREUM.info' value comes from the programmatically generated - # footer added by the agent. + # header added by the agent. response.mustcontain(no=["NREUM HEADER", "NREUM.info"]) @@ -991,13 +991,9 @@ def target_wsgi_application_manual_rum_insertion(environ, start_response): output = b"

RESPONSE

" header = get_browser_timing_header() - footer = get_browser_timing_footer() - header = get_browser_timing_header() - footer = get_browser_timing_footer() assert header == "" - assert footer == "" response_headers = [("Content-Type", "text/html; charset=utf-8"), ("Content-Length", str(len(output)))] start_response(status, response_headers) @@ -1023,6 +1019,42 @@ def test_html_insertion_manual_rum_insertion(): # The 'NREUM HEADER' value comes from our override for the header. # The 'NREUM.info' value comes from the programmatically generated - # footer added by the agent. + # header added by the agent. response.mustcontain(no=["NREUM HEADER", "NREUM.info"]) + + +_test_get_browser_timing_snippet_with_nonces = { + "browser_monitoring.enabled": True, + "browser_monitoring.auto_instrument": False, + "js_agent_loader": "", +} +_test_get_browser_timing_snippet_with_nonces_rum_info_re = re.compile(r"NREUM\.info={[^}]*}") + + +@override_application_settings(_test_get_browser_timing_snippet_with_nonces) +@web_transaction( + scheme="http", host="127.0.0.1", port=80, request_method="GET", request_path="/", query_string=None, headers={} +) +def test_get_browser_timing_snippet_with_nonces(): + header = get_browser_timing_header("NONCE") + + header = _test_get_browser_timing_snippet_with_nonces_rum_info_re.sub("NREUM.info={}", header) + assert ( + header + == '' + ) + + +@override_application_settings(_test_get_browser_timing_snippet_with_nonces) +@web_transaction( + scheme="http", host="127.0.0.1", port=80, request_method="GET", request_path="/", query_string=None, headers={} +) +def test_get_browser_timing_snippet_without_nonces(): + header = get_browser_timing_header() + + header = _test_get_browser_timing_snippet_with_nonces_rum_info_re.sub("NREUM.info={}", header) + assert ( + header + == '' + ) diff --git a/tests/agent_features/test_configuration.py b/tests/agent_features/test_configuration.py index 1a311e6930..f43b08495b 100644 --- 
a/tests/agent_features/test_configuration.py +++ b/tests/agent_features/test_configuration.py @@ -24,6 +24,8 @@ import logging +from testing_support.fixtures import override_generic_settings + from newrelic.api.exceptions import ConfigurationError from newrelic.common.object_names import callable_name from newrelic.config import ( @@ -595,6 +597,7 @@ def test_translate_deprecated_ignored_params_with_new_setting(): ("otlp_port", 0), ), ) +@override_generic_settings(global_settings(), {"host": "collector.newrelic.com"}) def test_default_values(name, expected_value): settings = global_settings() value = fetch_config_setting(settings, name) diff --git a/tests/agent_features/test_error_events.py b/tests/agent_features/test_error_events.py index 72bdb14f7c..039376477c 100644 --- a/tests/agent_features/test_error_events.py +++ b/tests/agent_features/test_error_events.py @@ -16,20 +16,16 @@ import time import webtest - from testing_support.fixtures import ( cat_enabled, make_cross_agent_headers, - make_synthetics_header, + make_synthetics_headers, override_application_settings, reset_core_stats_engine, validate_error_event_sample_data, validate_transaction_error_event_count, ) from testing_support.sample_applications import fully_featured_app -from testing_support.validators.validate_error_trace_attributes import ( - validate_error_trace_attributes, -) from testing_support.validators.validate_non_transaction_error_event import ( validate_non_transaction_error_event, ) @@ -43,6 +39,9 @@ SYNTHETICS_RESOURCE_ID = "09845779-16ef-4fa7-b7f2-44da8e62931c" SYNTHETICS_JOB_ID = "8c7dd3ba-4933-4cbb-b1ed-b62f511782f4" SYNTHETICS_MONITOR_ID = "dc452ae9-1a93-4ab5-8a33-600521e9cd00" +SYNTHETICS_TYPE = "scheduled" +SYNTHETICS_INITIATOR = "graphql" +SYNTHETICS_ATTRIBUTES = {"exampleAttribute": "1"} ERR_MESSAGE = "Transaction had bad value" ERROR = ValueError(ERR_MESSAGE) @@ -135,6 +134,9 @@ def test_transaction_error_cross_agent(): "nr.syntheticsResourceId": SYNTHETICS_RESOURCE_ID, 
"nr.syntheticsJobId": SYNTHETICS_JOB_ID, "nr.syntheticsMonitorId": SYNTHETICS_MONITOR_ID, + "nr.syntheticsType": SYNTHETICS_TYPE, + "nr.syntheticsInitiator": SYNTHETICS_INITIATOR, + "nr.syntheticsExampleAttribute": "1", } @@ -144,12 +146,15 @@ def test_transaction_error_with_synthetics(): "err_message": ERR_MESSAGE, } settings = application_settings() - headers = make_synthetics_header( + headers = make_synthetics_headers( + settings.encoding_key, settings.trusted_account_ids[0], SYNTHETICS_RESOURCE_ID, SYNTHETICS_JOB_ID, SYNTHETICS_MONITOR_ID, - settings.encoding_key, + SYNTHETICS_TYPE, + SYNTHETICS_INITIATOR, + SYNTHETICS_ATTRIBUTES, ) response = fully_featured_application.get("/", headers=headers, extra_environ=test_environ) diff --git a/tests/agent_features/test_lambda_handler.py b/tests/agent_features/test_lambda_handler.py index 40b6944072..69b05fbf8d 100644 --- a/tests/agent_features/test_lambda_handler.py +++ b/tests/agent_features/test_lambda_handler.py @@ -100,6 +100,8 @@ class Context(object): memory_limit_in_mb = 128 +# The lambda_hander has been deprecated for 3+ years +@pytest.mark.skip(reason="The lambda_handler has been deprecated") @pytest.mark.parametrize("is_cold", (False, True)) def test_lambda_transaction_attributes(is_cold, monkeypatch): # setup copies of the attribute lists for this test only @@ -139,6 +141,8 @@ def _test(): _test() +# The lambda_hander has been deprecated for 3+ years +@pytest.mark.skip(reason="The lambda_handler has been deprecated") @validate_transaction_trace_attributes(_expected_attributes) @validate_transaction_event_attributes(_expected_attributes) @override_application_settings(_override_settings) @@ -193,6 +197,8 @@ def test_lambda_malformed_request_headers(): } +# The lambda_hander has been deprecated for 3+ years +@pytest.mark.skip(reason="The lambda_handler has been deprecated") @validate_transaction_trace_attributes(_malformed_response_attributes) 
@validate_transaction_event_attributes(_malformed_response_attributes) @override_application_settings(_override_settings) @@ -229,6 +235,8 @@ def handler(event, context): } +# The lambda_handler has been deprecated for 3+ years +@pytest.mark.skip(reason="The lambda_handler has been deprecated") @validate_transaction_trace_attributes(_no_status_code_response) @validate_transaction_event_attributes(_no_status_code_response) @override_application_settings(_override_settings) @@ -253,6 +261,8 @@ def handler(event, context): ) +# The lambda_handler has been deprecated for 3+ years +@pytest.mark.skip(reason="The lambda_handler has been deprecated") @pytest.mark.parametrize("event,arn", ((empty_event, None), (firehose_event, "arn:aws:kinesis:EXAMPLE"))) def test_lambda_event_source_arn_attribute(event, arn): if arn is None: @@ -285,6 +295,8 @@ def _test(): _test() +# The lambda_handler has been deprecated for 3+ years +@pytest.mark.skip(reason="The lambda_handler has been deprecated") @pytest.mark.parametrize( "api", ( diff --git a/tests/agent_features/test_log_events.py b/tests/agent_features/test_log_events.py index bb173d6c4e..fb9991a823 100644 --- a/tests/agent_features/test_log_events.py +++ b/tests/agent_features/test_log_events.py @@ -12,14 +12,48 @@ # See the License for the specific language governing permissions and # limitations under the License.
-from newrelic.api.background_task import background_task -from newrelic.api.time_trace import current_trace -from newrelic.api.transaction import current_transaction, record_log_event, ignore_transaction -from testing_support.fixtures import override_application_settings, reset_core_stats_engine +import pytest +from testing_support.fixtures import ( + override_application_settings, + reset_core_stats_engine, +) from testing_support.validators.validate_log_event_count import validate_log_event_count -from testing_support.validators.validate_log_event_count_outside_transaction import validate_log_event_count_outside_transaction +from testing_support.validators.validate_log_event_count_outside_transaction import ( + validate_log_event_count_outside_transaction, +) from testing_support.validators.validate_log_events import validate_log_events -from testing_support.validators.validate_log_events_outside_transaction import validate_log_events_outside_transaction +from testing_support.validators.validate_log_events_outside_transaction import ( + validate_log_events_outside_transaction, +) + +from newrelic.api.background_task import background_task +from newrelic.api.time_trace import current_trace +from newrelic.api.transaction import ( + current_transaction, + ignore_transaction, + record_log_event, +) +from newrelic.core.config import _parse_attributes + + +class NonPrintableObject(object): + def __str__(self): + raise RuntimeError("Unable to print object.") + + __repr__ = __str__ + + +class NonSerializableObject(object): + def __str__(self): + return "<%s object>" % self.__class__.__name__ + + __repr__ = __str__ + + +def combine_dicts(defaults, overrides): + combined = defaults.copy() + combined.update(overrides) + return combined def set_trace_ids(): @@ -31,155 +65,333 @@ def set_trace_ids(): trace.guid = "abcdefgh" -def exercise_record_log_event(message="A"): +def exercise_record_log_event(): set_trace_ids() - record_log_event(message, "ERROR") - 
-enable_log_forwarding = override_application_settings({"application_logging.forwarding.enabled": True}) + record_log_event("no_other_arguments") + record_log_event("keyword_arguments", timestamp=1234, level="ERROR", attributes={"key": "value"}) + record_log_event("positional_arguments", "WARNING", 2345, {"key": "value"}) + record_log_event("serialized_attributes", attributes=_serialized_attributes) + record_log_event(None, attributes={"attributes_only": "value"}) + record_log_event({"attributes_only": "value"}) + record_log_event({"message": "dict_message"}) + record_log_event({"message": 123}) + + # Unsent due to message content missing + record_log_event("") + record_log_event(" ") + record_log_event(NonPrintableObject()) + record_log_event({"message": ""}) + record_log_event({"message": NonPrintableObject()}) + record_log_event({"filtered_attribute": "should_be_removed"}) + record_log_event(None) + + +enable_log_forwarding = override_application_settings( + { + "application_logging.forwarding.enabled": True, + "application_logging.forwarding.context_data.enabled": True, + "application_logging.forwarding.context_data.exclude": ["filtered_attribute"], + } +) disable_log_forwarding = override_application_settings({"application_logging.forwarding.enabled": False}) -_common_attributes_service_linking = {"timestamp": None, "hostname": None, "entity.name": "Python Agent Test (agent_features)", "entity.guid": None} +disable_log_attributes = override_application_settings( + {"application_logging.forwarding.enabled": True, "application_logging.forwarding.context_data.enabled": False} +) + +_common_attributes_service_linking = { + "timestamp": None, + "hostname": None, + "entity.name": "Python Agent Test (agent_features)", + "entity.guid": None, +} _common_attributes_trace_linking = {"span.id": "abcdefgh", "trace.id": "abcdefgh12345678"} _common_attributes_trace_linking.update(_common_attributes_service_linking) -_test_record_log_event_inside_transaction_events = 
[{"message": "A", "level": "ERROR"}] -_test_record_log_event_inside_transaction_events[0].update(_common_attributes_trace_linking) + +_serialized_attributes = { + "str_attr": "Value", + "bytes_attr": b"value", + "int_attr": 1, + "dict_attr": {"key": "value"}, + "non_serializable_attr": NonSerializableObject(), + "non_printable_attr": NonPrintableObject(), + "attr_value_too_long": "*" * 256, + "attr_name_too_long_" + ("*" * 237): "value", + "attr_name_with_prefix_too_long_" + ("*" * 220): "value", +} + +_exercise_record_log_event_events = [ + {"message": "no_other_arguments", "level": "UNKNOWN"}, + {"message": "keyword_arguments", "level": "ERROR", "timestamp": 1234, "context.key": "value"}, + {"message": "positional_arguments", "level": "WARNING", "timestamp": 2345, "context.key": "value"}, + { + "message": "serialized_attributes", + "context.str_attr": "Value", + "context.bytes_attr": b"value", + "context.int_attr": 1, + "context.dict_attr": "{'key': 'value'}", + "context.non_serializable_attr": "", + "context.attr_value_too_long": "*" * 255, + }, + {"context.attributes_only": "value"}, + {"message.attributes_only": "value"}, + {"message": "dict_message"}, + {"message": "123"}, +] +_exercise_record_log_event_inside_transaction_events = [ + combine_dicts(_common_attributes_trace_linking, log) for log in _exercise_record_log_event_events +] +_exercise_record_log_event_outside_transaction_events = [ + combine_dicts(_common_attributes_service_linking, log) for log in _exercise_record_log_event_events +] +_exercise_record_log_event_forgone_attrs = [ + "context.non_printable_attr", + "attr_name_too_long_", + "attr_name_with_prefix_too_long_", +] + + +# Test Log Forwarding + @enable_log_forwarding def test_record_log_event_inside_transaction(): - @validate_log_events(_test_record_log_event_inside_transaction_events) - @validate_log_event_count(1) + @validate_log_events( + _exercise_record_log_event_inside_transaction_events, 
forgone_attrs=_exercise_record_log_event_forgone_attrs + ) + @validate_log_event_count(len(_exercise_record_log_event_inside_transaction_events)) @background_task() def test(): exercise_record_log_event() - - test() + test() -_test_record_log_event_outside_transaction_events = [{"message": "A", "level": "ERROR"}] -_test_record_log_event_outside_transaction_events[0].update(_common_attributes_service_linking) @enable_log_forwarding @reset_core_stats_engine() def test_record_log_event_outside_transaction(): - @validate_log_events_outside_transaction(_test_record_log_event_outside_transaction_events) - @validate_log_event_count_outside_transaction(1) + @validate_log_events_outside_transaction( + _exercise_record_log_event_outside_transaction_events, forgone_attrs=_exercise_record_log_event_forgone_attrs + ) + @validate_log_event_count_outside_transaction(len(_exercise_record_log_event_outside_transaction_events)) def test(): exercise_record_log_event() test() -_test_record_log_event_unknown_level_inside_transaction_events = [{"message": "A", "level": "UNKNOWN"}] -_test_record_log_event_unknown_level_inside_transaction_events[0].update(_common_attributes_trace_linking) +@enable_log_forwarding +def test_ignored_transaction_logs_not_forwarded(): + @validate_log_event_count(0) + @background_task() + def test(): + ignore_transaction() + exercise_record_log_event() + + test() + + +# Test Message Truncation + +_test_log_event_truncation_events = [{"message": "A" * 32768}] + @enable_log_forwarding -def test_record_log_event_unknown_level_inside_transaction(): - @validate_log_events(_test_record_log_event_unknown_level_inside_transaction_events) +def test_log_event_truncation_inside_transaction(): + @validate_log_events(_test_log_event_truncation_events) @validate_log_event_count(1) @background_task() def test(): - set_trace_ids() - record_log_event("A") - - test() + record_log_event("A" * 33000) + test() -_test_record_log_event_unknown_level_outside_transaction_events = 
[{"message": "A", "level": "UNKNOWN"}] -_test_record_log_event_unknown_level_outside_transaction_events[0].update(_common_attributes_service_linking) @enable_log_forwarding @reset_core_stats_engine() -def test_record_log_event_unknown_level_outside_transaction(): - @validate_log_events_outside_transaction(_test_record_log_event_unknown_level_outside_transaction_events) +def test_log_event_truncation_outside_transaction(): + @validate_log_events_outside_transaction(_test_log_event_truncation_events) @validate_log_event_count_outside_transaction(1) def test(): - set_trace_ids() - record_log_event("A") + record_log_event("A" * 33000) test() -@enable_log_forwarding -def test_record_log_event_empty_message_inside_transaction(): +# Test Log Forwarding Settings + + +@disable_log_forwarding +def test_disabled_record_log_event_inside_transaction(): @validate_log_event_count(0) @background_task() def test(): - exercise_record_log_event("") - + exercise_record_log_event() + test() -@enable_log_forwarding +@disable_log_forwarding @reset_core_stats_engine() -def test_record_log_event_empty_message_outside_transaction(): +def test_disabled_record_log_event_outside_transaction(): @validate_log_event_count_outside_transaction(0) def test(): - exercise_record_log_event("") + exercise_record_log_event() test() -@enable_log_forwarding -def test_record_log_event_whitespace_inside_transaction(): - @validate_log_event_count(0) +# Test Log Attribute Settings + + +@disable_log_attributes +def test_attributes_disabled_inside_transaction(): + @validate_log_events([{"message": "A"}], forgone_attrs=["context.key"]) + @validate_log_event_count(1) @background_task() def test(): - exercise_record_log_event(" ") + record_log_event("A", attributes={"key": "value"}) test() -@enable_log_forwarding +@disable_log_attributes @reset_core_stats_engine() -def test_record_log_event_whitespace_outside_transaction(): - @validate_log_event_count_outside_transaction(0) +def 
test_attributes_disabled_outside_transaction(): + @validate_log_events_outside_transaction([{"message": "A"}], forgone_attrs=["context.key"]) + @validate_log_event_count_outside_transaction(1) def test(): - exercise_record_log_event(" ") + record_log_event("A", attributes={"key": "value"}) test() -@enable_log_forwarding -def test_ignored_transaction_logs_not_forwarded(): - @validate_log_event_count(0) +_test_record_log_event_context_attribute_filtering_params = [ + ("", "", "A", True), + ("", "A", "A", False), + ("", "A", "B", True), + ("A B", "*", "A", True), + ("A B", "*", "B", True), + ("A B", "*", "C", False), + ("A B", "C", "A", True), + ("A B", "C", "C", False), + ("A B", "B", "A", True), + ("A B", "B", "B", False), + ("A", "A *", "A", False), + ("A", "A *", "B", False), + ("A*", "", "A", True), + ("A*", "", "AB", True), + ("", "A*", "A", False), + ("", "A*", "B", True), + ("A*", "AB", "AC", True), + ("A*", "AB", "AB", False), + ("AB", "A*", "AB", True), + ("A*", "AB*", "ACB", True), + ("A*", "AB*", "ABC", False), +] + + +@pytest.mark.parametrize("prefix", ("context", "message")) +@pytest.mark.parametrize("include,exclude,attr,expected", _test_record_log_event_context_attribute_filtering_params) +def test_record_log_event_context_attribute_filtering_inside_transaction(include, exclude, attr, expected, prefix): + if expected: + expected_event = {"required_attrs": [".".join((prefix, attr))]} + else: + expected_event = {"forgone_attrs": [".".join((prefix, attr))]} + + @override_application_settings( + { + "application_logging.forwarding.enabled": True, + "application_logging.forwarding.context_data.enabled": True, + "application_logging.forwarding.context_data.include": _parse_attributes(include), + "application_logging.forwarding.context_data.exclude": _parse_attributes(exclude), + } + ) + @validate_log_events(**expected_event) + @validate_log_event_count(1) @background_task() def test(): - ignore_transaction() - exercise_record_log_event() + if prefix == 
"context": + record_log_event("A", attributes={attr: 1}) + else: + record_log_event({"message": "A", attr: 1}) test() -_test_log_event_truncation_events = [{"message": "A" * 32768, "level": "ERROR"}] -_test_log_event_truncation_events[0].update(_common_attributes_trace_linking) - -@enable_log_forwarding -def test_log_event_truncation(): - @validate_log_events(_test_log_event_truncation_events) - @validate_log_event_count(1) - @background_task() +@pytest.mark.parametrize("prefix", ("context", "message")) +@pytest.mark.parametrize("include,exclude,attr,expected", _test_record_log_event_context_attribute_filtering_params) +@reset_core_stats_engine() +def test_record_log_event_context_attribute_filtering_outside_transaction(include, exclude, attr, expected, prefix): + if expected: + expected_event = {"required_attrs": [".".join((prefix, attr))]} + else: + expected_event = {"forgone_attrs": [".".join((prefix, attr))]} + + @override_application_settings( + { + "application_logging.forwarding.enabled": True, + "application_logging.forwarding.context_data.enabled": True, + "application_logging.forwarding.context_data.include": _parse_attributes(include), + "application_logging.forwarding.context_data.exclude": _parse_attributes(exclude), + } + ) + @validate_log_events_outside_transaction(**expected_event) + @validate_log_event_count_outside_transaction(1) def test(): - exercise_record_log_event("A" * 33000) + if prefix == "context": + record_log_event("A", attributes={attr: 1}) + else: + record_log_event({"message": "A", attr: 1}) test() -@disable_log_forwarding -def test_record_log_event_inside_transaction(): - @validate_log_event_count(0) +_test_record_log_event_linking_attribute_no_filtering_params = [ + ("", ""), + ("", "entity.name"), + ("", "*"), +] + + +@pytest.mark.parametrize("include,exclude", _test_record_log_event_linking_attribute_no_filtering_params) +def test_record_log_event_linking_attribute_no_filtering_inside_transaction(include, exclude): + attr = 
"entity.name" + + @override_application_settings( + { + "application_logging.forwarding.enabled": True, + "application_logging.forwarding.context_data.enabled": True, + "application_logging.forwarding.context_data.include": _parse_attributes(include), + "application_logging.forwarding.context_data.exclude": _parse_attributes(exclude), + } + ) + @validate_log_events(required_attrs=[attr]) + @validate_log_event_count(1) @background_task() def test(): - exercise_record_log_event() - + record_log_event("A") + test() -@disable_log_forwarding +@pytest.mark.parametrize("include,exclude", _test_record_log_event_linking_attribute_no_filtering_params) @reset_core_stats_engine() -def test_record_log_event_outside_transaction(): - @validate_log_event_count_outside_transaction(0) +def test_record_log_event_linking_attribute_filtering_outside_transaction(include, exclude): + attr = "entity.name" + + @override_application_settings( + { + "application_logging.forwarding.enabled": True, + "application_logging.forwarding.context_data.enabled": True, + "application_logging.forwarding.context_data.include": _parse_attributes(include), + "application_logging.forwarding.context_data.exclude": _parse_attributes(exclude), + } + ) + @validate_log_events_outside_transaction(required_attrs=[attr]) + @validate_log_event_count_outside_transaction(1) def test(): - exercise_record_log_event() + record_log_event("A") test() diff --git a/tests/agent_features/test_logs_in_context.py b/tests/agent_features/test_logs_in_context.py index 90b6c92672..8693c0f083 100644 --- a/tests/agent_features/test_logs_in_context.py +++ b/tests/agent_features/test_logs_in_context.py @@ -51,8 +51,14 @@ class NonPrintableObject(object): def __str__(self): raise RuntimeError("Unable to print object.") - def __repr__(self): - raise RuntimeError("Unable to print object.") + __repr__ = __str__ + + +class NonSerializableObject(object): + def __str__(self): + return "<%s object>" % self.__class__.__name__ + + __repr__ = 
__str__ def test_newrelic_logger_no_error(log_buffer): @@ -63,14 +69,15 @@ def test_newrelic_logger_no_error(log_buffer): "null": None, "array": [1, 2, 3], "bool": True, - "non_serializable": {"set"}, + "set": {"set"}, + "non_serializable": NonSerializableObject(), "non_printable": NonPrintableObject(), "object": { "first": "bar", "second": "baz", }, } - _logger.info(u"Hello %s", u"World", extra=extra) + _logger.info("Hello %s", "World", extra=extra) log_buffer.seek(0) message = json.load(log_buffer) @@ -88,24 +95,25 @@ def test_newrelic_logger_no_error(log_buffer): assert isinstance(line_number, int) expected = { - u"entity.name": u"Python Agent Test (agent_features)", - u"entity.type": u"SERVICE", - u"message": u"Hello World", - u"log.level": u"INFO", - u"logger.name": u"test_logs_in_context", - u"thread.name": u"MainThread", - u"process.name": u"MainProcess", - u"extra.string": u"foo", - u"extra.integer": 1, - u"extra.float": 1.23, - u"extra.null": None, - u"extra.array": [1, 2, 3], - u"extra.bool": True, - u"extra.non_serializable": u"set(['set'])" if six.PY2 else u"{'set'}", - u"extra.non_printable": u"", - u"extra.object": { - u"first": u"bar", - u"second": u"baz", + "entity.name": "Python Agent Test (agent_features)", + "entity.type": "SERVICE", + "message": "Hello World", + "log.level": "INFO", + "logger.name": "test_logs_in_context", + "thread.name": "MainThread", + "process.name": "MainProcess", + "extra.string": "foo", + "extra.integer": 1, + "extra.float": 1.23, + "extra.null": None, + "extra.array": [1, 2, 3], + "extra.bool": True, + "extra.set": '["set"]', + "extra.non_serializable": "", + "extra.non_printable": "", + "extra.object": { + "first": "bar", + "second": "baz", }, } expected_extra_txn_keys = ( @@ -119,7 +127,6 @@ def test_newrelic_logger_no_error(log_buffer): assert set(message.keys()) == set(expected_extra_txn_keys) - class ExceptionForTest(ValueError): pass @@ -129,7 +136,7 @@ def test_newrelic_logger_error_inside_transaction(log_buffer): 
try: raise ExceptionForTest except ExceptionForTest: - _logger.exception(u"oops") + _logger.exception("oops") log_buffer.seek(0) message = json.load(log_buffer) @@ -147,16 +154,16 @@ def test_newrelic_logger_error_inside_transaction(log_buffer): assert isinstance(line_number, int) expected = { - u"entity.name": u"Python Agent Test (agent_features)", - u"entity.type": u"SERVICE", - u"message": u"oops", - u"log.level": u"ERROR", - u"logger.name": u"test_logs_in_context", - u"thread.name": u"MainThread", - u"process.name": u"MainProcess", - u"error.class": u"test_logs_in_context:ExceptionForTest", - u"error.message": u"", - u"error.expected": False, + "entity.name": "Python Agent Test (agent_features)", + "entity.type": "SERVICE", + "message": "oops", + "log.level": "ERROR", + "logger.name": "test_logs_in_context", + "thread.name": "MainThread", + "process.name": "MainProcess", + "error.class": "test_logs_in_context:ExceptionForTest", + "error.message": "", + "error.expected": False, } expected_extra_txn_keys = ( "trace.id", @@ -175,7 +182,7 @@ def test_newrelic_logger_error_outside_transaction(log_buffer): try: raise ExceptionForTest except ExceptionForTest: - _logger.exception(u"oops") + _logger.exception("oops") log_buffer.seek(0) message = json.load(log_buffer) @@ -193,15 +200,15 @@ def test_newrelic_logger_error_outside_transaction(log_buffer): assert isinstance(line_number, int) expected = { - u"entity.name": u"Python Agent Test (agent_features)", - u"entity.type": u"SERVICE", - u"message": u"oops", - u"log.level": u"ERROR", - u"logger.name": u"test_logs_in_context", - u"thread.name": u"MainThread", - u"process.name": u"MainProcess", - u"error.class": u"test_logs_in_context:ExceptionForTest", - u"error.message": u"", + "entity.name": "Python Agent Test (agent_features)", + "entity.type": "SERVICE", + "message": "oops", + "log.level": "ERROR", + "logger.name": "test_logs_in_context", + "thread.name": "MainThread", + "process.name": "MainProcess", + "error.class": 
"test_logs_in_context:ExceptionForTest", + "error.message": "", } expected_extra_txn_keys = ( "entity.guid", @@ -214,14 +221,13 @@ def test_newrelic_logger_error_outside_transaction(log_buffer): assert set(message.keys()) == set(expected_extra_txn_keys) - EXPECTED_KEYS_TXN = ( "trace.id", "span.id", "entity.name", "entity.type", "entity.guid", - "hostname", + "hostname", ) EXPECTED_KEYS_NO_TXN = EXPECTED_KEYS_TRACE_ENDED = ( diff --git a/tests/agent_features/test_ml_events.py b/tests/agent_features/test_ml_events.py index b2a77624fe..96bb95f95e 100644 --- a/tests/agent_features/test_ml_events.py +++ b/tests/agent_features/test_ml_events.py @@ -15,7 +15,7 @@ import time import pytest -from testing_support.fixtures import ( # function_not_called,; override_application_settings, +from testing_support.fixtures import ( function_not_called, override_application_settings, reset_core_stats_engine, diff --git a/tests/agent_features/test_serverless_mode.py b/tests/agent_features/test_serverless_mode.py index 189481f705..6114102bf6 100644 --- a/tests/agent_features/test_serverless_mode.py +++ b/tests/agent_features/test_serverless_mode.py @@ -151,6 +151,8 @@ def _test_inbound_dt_payload_acceptance(): _test_inbound_dt_payload_acceptance() +# The lambda_hander has been deprecated for 3+ years +@pytest.mark.skip(reason="The lambda_handler has been deprecated") @pytest.mark.parametrize("arn_set", (True, False)) def test_payload_metadata_arn(serverless_application, arn_set): # If the session object gathers the arn from the settings object before the diff --git a/tests/agent_features/test_synthetics.py b/tests/agent_features/test_synthetics.py index 2e08144cc7..350cab03f0 100644 --- a/tests/agent_features/test_synthetics.py +++ b/tests/agent_features/test_synthetics.py @@ -17,7 +17,7 @@ from testing_support.external_fixtures import validate_synthetics_external_trace_header from testing_support.fixtures import ( cat_enabled, - make_synthetics_header, + make_synthetics_headers, 
override_application_settings, ) from testing_support.validators.validate_synthetics_event import ( @@ -37,6 +37,9 @@ SYNTHETICS_RESOURCE_ID = "09845779-16ef-4fa7-b7f2-44da8e62931c" SYNTHETICS_JOB_ID = "8c7dd3ba-4933-4cbb-b1ed-b62f511782f4" SYNTHETICS_MONITOR_ID = "dc452ae9-1a93-4ab5-8a33-600521e9cd00" +SYNTHETICS_TYPE = "scheduled" +SYNTHETICS_INITIATOR = "graphql" +SYNTHETICS_ATTRIBUTES = {"exampleAttribute": "1"} _override_settings = { "encoding_key": ENCODING_KEY, @@ -45,15 +48,19 @@ } -def _make_synthetics_header( +def _make_synthetics_headers( version="1", account_id=ACCOUNT_ID, resource_id=SYNTHETICS_RESOURCE_ID, job_id=SYNTHETICS_JOB_ID, monitor_id=SYNTHETICS_MONITOR_ID, encoding_key=ENCODING_KEY, + info_version="1", + type_=SYNTHETICS_TYPE, + initiator=SYNTHETICS_INITIATOR, + attributes=SYNTHETICS_ATTRIBUTES, ): - return make_synthetics_header(account_id, resource_id, job_id, monitor_id, encoding_key, version) + return make_synthetics_headers(encoding_key, account_id, resource_id, job_id, monitor_id, type_, initiator, attributes, synthetics_version=version, synthetics_info_version=info_version) def decode_header(header, encoding_key=ENCODING_KEY): @@ -80,6 +87,9 @@ def target_wsgi_application(environ, start_response): ("nr.syntheticsResourceId", SYNTHETICS_RESOURCE_ID), ("nr.syntheticsJobId", SYNTHETICS_JOB_ID), ("nr.syntheticsMonitorId", SYNTHETICS_MONITOR_ID), + ("nr.syntheticsType", SYNTHETICS_TYPE), + ("nr.syntheticsInitiator", SYNTHETICS_INITIATOR), + ("nr.syntheticsExampleAttribute", "1"), ] _test_valid_synthetics_event_forgone = [] @@ -89,21 +99,51 @@ def target_wsgi_application(environ, start_response): ) @override_application_settings(_override_settings) def test_valid_synthetics_event(): - headers = _make_synthetics_header() + headers = _make_synthetics_headers() + response = target_application.get("/", headers=headers) + + +_test_valid_synthetics_event_without_info_required = [ + ("nr.syntheticsResourceId", SYNTHETICS_RESOURCE_ID), + 
("nr.syntheticsJobId", SYNTHETICS_JOB_ID), + ("nr.syntheticsMonitorId", SYNTHETICS_MONITOR_ID), +] +_test_valid_synthetics_event_without_info_forgone = [ + "nr.syntheticsType", + "nr.syntheticsInitiator", + "nr.syntheticsExampleAttribute", +] + + +@validate_synthetics_event( + _test_valid_synthetics_event_without_info_required, _test_valid_synthetics_event_without_info_forgone, should_exist=True +) +@override_application_settings(_override_settings) +def test_valid_synthetics_event_without_info(): + headers = _make_synthetics_headers(type_=None, initiator=None, attributes=None) response = target_application.get("/", headers=headers) @validate_synthetics_event([], [], should_exist=False) @override_application_settings(_override_settings) def test_no_synthetics_event_unsupported_version(): - headers = _make_synthetics_header(version="0") + headers = _make_synthetics_headers(version="0") + response = target_application.get("/", headers=headers) + + +@validate_synthetics_event( + _test_valid_synthetics_event_without_info_required, _test_valid_synthetics_event_without_info_forgone, should_exist=True +) +@override_application_settings(_override_settings) +def test_synthetics_event_unsupported_info_version(): + headers = _make_synthetics_headers(info_version="0") response = target_application.get("/", headers=headers) @validate_synthetics_event([], [], should_exist=False) @override_application_settings(_override_settings) def test_no_synthetics_event_untrusted_account(): - headers = _make_synthetics_header(account_id="999") + headers = _make_synthetics_headers(account_id="999") response = target_application.get("/", headers=headers) @@ -111,7 +151,20 @@ def test_no_synthetics_event_untrusted_account(): @override_application_settings(_override_settings) def test_no_synthetics_event_mismatched_encoding_key(): encoding_key = "zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz" - headers = _make_synthetics_header(encoding_key=encoding_key) + headers = 
_make_synthetics_headers(encoding_key=encoding_key) + response = target_application.get("/", headers=headers) + + +@validate_synthetics_event( + _test_valid_synthetics_event_without_info_required, _test_valid_synthetics_event_without_info_forgone, should_exist=True +) +@override_application_settings(_override_settings) +def test_synthetics_event_mismatched_info_encoding_key(): + encoding_key = "zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz" + headers = { + "X-NewRelic-Synthetics": _make_synthetics_headers(type_=None)["X-NewRelic-Synthetics"], + "X-NewRelic-Synthetics-Info": _make_synthetics_headers(encoding_key=encoding_key)["X-NewRelic-Synthetics-Info"], + } response = target_application.get("/", headers=headers) @@ -119,6 +172,9 @@ def test_no_synthetics_event_mismatched_encoding_key(): "synthetics_resource_id": SYNTHETICS_RESOURCE_ID, "synthetics_job_id": SYNTHETICS_JOB_ID, "synthetics_monitor_id": SYNTHETICS_MONITOR_ID, + "synthetics_type": SYNTHETICS_TYPE, + "synthetics_initiator": SYNTHETICS_INITIATOR, + "synthetics_example_attribute": "1", } @@ -126,7 +182,7 @@ def test_no_synthetics_event_mismatched_encoding_key(): @validate_synthetics_transaction_trace(_test_valid_synthetics_tt_required) @override_application_settings(_override_settings) def test_valid_synthetics_in_transaction_trace(): - headers = _make_synthetics_header() + headers = _make_synthetics_headers() response = target_application.get("/", headers=headers) @@ -146,26 +202,36 @@ def test_no_synthetics_in_transaction_trace(): @validate_synthetics_event([], [], should_exist=False) @override_application_settings(_disabled_settings) def test_synthetics_disabled(): - headers = _make_synthetics_header() + headers = _make_synthetics_headers() response = target_application.get("/", headers=headers) -_external_synthetics_header = ("X-NewRelic-Synthetics", _make_synthetics_header()["X-NewRelic-Synthetics"]) +_external_synthetics_headers = _make_synthetics_headers() +_external_synthetics_header = 
_external_synthetics_headers["X-NewRelic-Synthetics"] +_external_synthetics_info_header = _external_synthetics_headers["X-NewRelic-Synthetics-Info"] @cat_enabled -@validate_synthetics_external_trace_header(required_header=_external_synthetics_header, should_exist=True) +@validate_synthetics_external_trace_header(_external_synthetics_header, _external_synthetics_info_header) @override_application_settings(_override_settings) def test_valid_synthetics_external_trace_header(): - headers = _make_synthetics_header() + headers = _make_synthetics_headers() + response = target_application.get("/", headers=headers) + + +@cat_enabled +@validate_synthetics_external_trace_header(_external_synthetics_header, None) +@override_application_settings(_override_settings) +def test_valid_synthetics_external_trace_header_without_info(): + headers = _make_synthetics_headers(type_=None) response = target_application.get("/", headers=headers) @cat_enabled -@validate_synthetics_external_trace_header(required_header=_external_synthetics_header, should_exist=True) +@validate_synthetics_external_trace_header(_external_synthetics_header, _external_synthetics_info_header) @override_application_settings(_override_settings) def test_valid_external_trace_header_with_byte_inbound_header(): - headers = _make_synthetics_header() + headers = _make_synthetics_headers() headers = {k.encode("utf-8"): v.encode("utf-8") for k, v in headers.items()} @web_transaction( @@ -178,7 +244,7 @@ def webapp(): webapp() -@validate_synthetics_external_trace_header(should_exist=False) +@validate_synthetics_external_trace_header(None, None) @override_application_settings(_override_settings) def test_no_synthetics_external_trace_header(): response = target_application.get("/") @@ -194,7 +260,7 @@ def _synthetics_limit_test(num_requests, num_events, num_transactions): # Send requests - headers = _make_synthetics_header() + headers = _make_synthetics_headers() for i in range(num_requests): response = 
target_application.get("/", headers=headers) diff --git a/tests/agent_features/test_transaction_event_data_and_some_browser_stuff_too.py b/tests/agent_features/test_transaction_event_data_and_some_browser_stuff_too.py index 73bdfcf535..c1d9283c25 100644 --- a/tests/agent_features/test_transaction_event_data_and_some_browser_stuff_too.py +++ b/tests/agent_features/test_transaction_event_data_and_some_browser_stuff_too.py @@ -59,7 +59,6 @@ def test_capture_attributes_enabled(): header = response.html.html.head.script.string content = response.html.html.body.p.string - footer = response.html.html.body.script.string # Validate actual body content. @@ -71,10 +70,10 @@ def test_capture_attributes_enabled(): assert header.find("NREUM") != -1 - # Now validate the various fields of the footer related to analytics. + # Now validate the various fields of the header related to analytics. # The fields are held by a JSON dictionary. - data = json.loads(footer.split("NREUM.info=")[1]) + data = json.loads(header.split("NREUM.info=")[1].split(";\n")[0]) obfuscation_key = settings.license_key[:13] @@ -116,7 +115,6 @@ def test_no_attributes_recorded(): header = response.html.html.head.script.string content = response.html.html.body.p.string - footer = response.html.html.body.script.string # Validate actual body content. @@ -128,13 +126,13 @@ def test_no_attributes_recorded(): assert header.find("NREUM") != -1 - # Now validate the various fields of the footer related to analytics. + # Now validate the various fields of the header related to analytics. # The fields are held by a JSON dictionary. - data = json.loads(footer.split("NREUM.info=")[1]) + data = json.loads(header.split("NREUM.info=")[1].split(";\n")[0]) # As we are not recording any user or agent attributes, we should not - # actually have an entry at all in the footer. + # actually have an entry at all in the header. 
assert "atts" not in data @@ -163,7 +161,6 @@ def test_analytic_events_capture_attributes_disabled(): header = response.html.html.head.script.string content = response.html.html.body.p.string - footer = response.html.html.body.script.string # Validate actual body content. @@ -178,7 +175,7 @@ def test_analytic_events_capture_attributes_disabled(): # Now validate that attributes are present, since browser monitoring should # be enabled. - data = json.loads(footer.split("NREUM.info=")[1]) + data = json.loads(header.split("NREUM.info=")[1].split(";\n")[0]) assert "atts" in data @@ -196,7 +193,6 @@ def test_capture_attributes_default(): header = response.html.html.head.script.string content = response.html.html.body.p.string - footer = response.html.html.body.script.string # Validate actual body content. @@ -211,7 +207,7 @@ def test_capture_attributes_default(): # Now validate that attributes are not present, since should # be disabled. - data = json.loads(footer.split("NREUM.info=")[1]) + data = json.loads(header.split("NREUM.info=")[1].split(";\n")[0]) assert "atts" not in data @@ -258,7 +254,6 @@ def test_capture_attributes_disabled(): header = response.html.html.head.script.string content = response.html.html.body.p.string - footer = response.html.html.body.script.string # Validate actual body content. @@ -273,7 +268,7 @@ def test_capture_attributes_disabled(): # Now validate that attributes are not present, since should # be disabled. - data = json.loads(footer.split("NREUM.info=")[1]) + data = json.loads(header.split("NREUM.info=")[1].split(";\n")[0]) assert "atts" not in data @@ -307,7 +302,6 @@ def test_collect_analytic_events_disabled(): header = response.html.html.head.script.string content = response.html.html.body.p.string - footer = response.html.html.body.script.string # Validate actual body content. @@ -322,7 +316,7 @@ def test_collect_analytic_events_disabled(): # Now validate that attributes are present, since should # be enabled. 
- data = json.loads(footer.split("NREUM.info=")[1]) + data = json.loads(header.split("NREUM.info=")[1].split(";\n")[0]) assert "atts" in data @@ -351,7 +345,6 @@ def test_analytic_events_disabled(): header = response.html.html.head.script.string content = response.html.html.body.p.string - footer = response.html.html.body.script.string # Validate actual body content. @@ -366,7 +359,7 @@ def test_analytic_events_disabled(): # Now validate that attributes are present, since should # be enabled. - data = json.loads(footer.split("NREUM.info=")[1]) + data = json.loads(header.split("NREUM.info=")[1].split(";\n")[0]) assert "atts" in data diff --git a/tests/agent_streaming/test_infinite_tracing.py b/tests/agent_streaming/test_infinite_tracing.py index f1119c38cd..59060347e1 100644 --- a/tests/agent_streaming/test_infinite_tracing.py +++ b/tests/agent_streaming/test_infinite_tracing.py @@ -389,12 +389,12 @@ def _test(): # Wait for OK status code to close the channel start_time = time.time() while not (request_iterator._stream and request_iterator._stream.done()): - assert time.time() - start_time < 5, "Timed out waiting for OK status code." + assert time.time() - start_time < 15, "Timed out waiting for OK status code." time.sleep(0.5) # Put new span and wait until buffer has been emptied and either sent or lost stream_buffer.put(span) - assert spans_processed_event.wait(timeout=5), "Data lost in stream buffer iterator." + assert spans_processed_event.wait(timeout=15), "Data lost in stream buffer iterator." _test() diff --git a/tests/agent_unittests/test_encoding_utils.py b/tests/agent_unittests/test_encoding_utils.py new file mode 100644 index 0000000000..397f2fa2ef --- /dev/null +++ b/tests/agent_unittests/test_encoding_utils.py @@ -0,0 +1,52 @@ +# Copyright 2010 New Relic, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import pytest + +from newrelic.common.encoding_utils import camel_case, snake_case + + +@pytest.mark.parametrize("input_,expected,upper", [ + ("", "", False), + ("", "", True), + ("my_string", "myString", False), + ("my_string", "MyString", True), + ("LeaveCase", "LeaveCase", False), + ("correctCase", "CorrectCase", True), + ("UPPERcaseLETTERS", "UPPERcaseLETTERS", False), + ("UPPERcaseLETTERS", "UPPERcaseLETTERS", True), + ("lowerCASEletters", "lowerCASEletters", False), + ("lowerCASEletters", "LowerCASEletters", True), + ("very_long_snake_string", "VeryLongSnakeString", True), + ("kebab-case", "kebab-case", False), +]) +def test_camel_case(input_, expected, upper): + output = camel_case(input_, upper=upper) + assert output == expected + + +@pytest.mark.parametrize("input_,expected", [ + ("", ""), + ("", ""), + ("my_string", "my_string"), + ("myString", "my_string"), + ("MyString", "my_string"), + ("UPPERcaseLETTERS", "uppercase_letters"), + ("lowerCASEletters", "lower_caseletters"), + ("VeryLongCamelString", "very_long_camel_string"), + ("kebab-case", "kebab-case"), +]) +def test_snake_case(input_, expected): + output = snake_case(input_) + assert output == expected diff --git a/tests/agent_unittests/test_environment.py b/tests/agent_unittests/test_environment.py index b2c639adc2..84dd753a9a 100644 --- a/tests/agent_unittests/test_environment.py +++ b/tests/agent_unittests/test_environment.py @@ -15,9 +15,13 @@ import sys import pytest +from testing_support.fixtures import override_generic_settings +from newrelic.core.config import global_settings from 
newrelic.core.environment import environment_settings +settings = global_settings() + def module(version): class Module(object): @@ -47,6 +51,23 @@ def test_plugin_list(): assert "pytest (%s)" % (pytest.__version__) in plugin_list +@override_generic_settings(settings, {"package_reporting.enabled": False}) +def test_plugin_list_when_package_reporting_disabled(): + # Let's pretend we fired an import hook + import newrelic.hooks.adapter_gunicorn # noqa: F401 + + environment_info = environment_settings() + + for key, plugin_list in environment_info: + if key == "Plugin List": + break + else: + assert False, "'Plugin List' not found" + + # Check that bogus plugins don't get reported + assert plugin_list == [] + + class NoIteratorDict(object): def __init__(self, d): self.d = d diff --git a/tests/agent_unittests/test_harvest_loop.py b/tests/agent_unittests/test_harvest_loop.py index 15b67a81e1..a3eaf7b5ff 100644 --- a/tests/agent_unittests/test_harvest_loop.py +++ b/tests/agent_unittests/test_harvest_loop.py @@ -143,6 +143,10 @@ def transaction_node(request): synthetics_job_id=None, synthetics_monitor_id=None, synthetics_header=None, + synthetics_type=None, + synthetics_initiator=None, + synthetics_attributes=None, + synthetics_info_header=None, is_part_of_cat=False, trip_id="4485b89db608aece", path_hash=None, diff --git a/tests/agent_unittests/test_package_version_utils.py b/tests/agent_unittests/test_package_version_utils.py index b57c91aa60..ccfef670b6 100644 --- a/tests/agent_unittests/test_package_version_utils.py +++ b/tests/agent_unittests/test_package_version_utils.py @@ -16,7 +16,6 @@ import warnings import pytest -import six from testing_support.validators.validate_function_called import validate_function_called from newrelic.common.package_version_utils import ( @@ -26,6 +25,7 @@ get_package_version, get_package_version_tuple, ) +from newrelic.packages import six # Notes: # importlib.metadata was a provisional addition to the std library in PY38 and PY39 diff 
--git a/tests/agent_unittests/test_wrappers.py b/tests/agent_unittests/test_wrappers.py new file mode 100644 index 0000000000..eccee4df5b --- /dev/null +++ b/tests/agent_unittests/test_wrappers.py @@ -0,0 +1,81 @@ +# Copyright 2010 New Relic, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import pytest + +from newrelic.common.object_wrapper import function_wrapper + + +@pytest.fixture(scope="function") +def wrapper(): + @function_wrapper + def _wrapper(wrapped, instance, args, kwargs): + return wrapped(*args, **kwargs) + + return _wrapper + + +@pytest.fixture(scope="function") +def wrapped_function(wrapper): + @wrapper + def wrapped(): + return True + + return wrapped + + +def test_nr_prefix_attributes(wrapped_function): + wrapped_function._nr_attr = 1 + vars_ = vars(wrapped_function) + + assert wrapped_function._nr_attr == 1, "_nr_ attributes should be stored on wrapper object and retrievable." + assert "_nr_attr" not in vars_, "_nr_ attributes should NOT appear in __dict__." + + +def test_self_prefix_attributes(wrapped_function): + wrapped_function._self_attr = 1 + vars_ = vars(wrapped_function) + + assert wrapped_function._self_attr == 1, "_self_ attributes should be stored on wrapper object and retrievable." + assert "_self_attr" not in vars_, "_self_ attributes should NOT appear in __dict__."
+ + +def test_prefixed_attributes_share_namespace(wrapped_function): + wrapped_function._nr_attr = 1 + wrapped_function._self_attr = 2 + + assert ( + wrapped_function._nr_attr == 2 + ), "_nr_ attributes share a namespace with _self_ attributes and should be overwritten." + + +def test_wrapped_function_attributes(wrapped_function): + wrapped_function._other_attr = 1 + vars_ = vars(wrapped_function) + + assert wrapped_function._other_attr == 1, "All other attributes should be stored on wrapped object and retrievable." + assert "_other_attr" in vars_, "Other types of attributes SHOULD appear in __dict__." + + assert wrapped_function() + + +def test_multiple_wrapper_last_object(wrapper): + def wrapped(): + pass + + wrapper_1 = wrapper(wrapped) + wrapper_2 = wrapper(wrapper_1) + + assert wrapper_2._nr_last_object is wrapped, "Last object in chain should be the wrapped function." + assert wrapper_2._nr_next_object is wrapper_1, "Next object in chain should be the middle function." diff --git a/tests/cross_agent/fixtures/docker_container_id_v2/README.md b/tests/cross_agent/fixtures/docker_container_id_v2/README.md new file mode 100644 index 0000000000..ea6cc25035 --- /dev/null +++ b/tests/cross_agent/fixtures/docker_container_id_v2/README.md @@ -0,0 +1,6 @@ +These tests cover parsing of Docker container IDs on Linux hosts out of +`/proc/self/mountinfo` (or `/proc/<pid>/mountinfo` more generally). + +The `cases.json` file lists each filename in this directory containing +example `/proc/self/mountinfo` content, and the expected Docker container ID that +should be parsed from that file.
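The parsing these fixtures exercise can be sketched as follows. This is a hypothetical helper, not the agent's actual code: it scans `mountinfo` content for a 64-character lowercase-hex Docker container ID, returning `None` when none is found, which matches the `cases.json` expectations for the empty and invalid-character fixtures.

```python
import re

# Hypothetical sketch (the agent's real implementation may differ):
# a v2 cgroup host exposes the container ID only in mount paths like
# /docker/containers/<64-hex-id>/..., so match exactly 64 hex chars.
CONTAINER_ID_RE = re.compile(r"/docker/containers/([0-9a-f]{64})/")


def parse_docker_container_id(mountinfo_text):
    # Return the first valid container ID found, or None.
    for line in mountinfo_text.splitlines():
        match = CONTAINER_ID_RE.search(line)
        if match:
            return match.group(1)
    return None
```

Run against the `docker-20.10.16.txt` fixture content, this would recover the `84cf3472...` ID listed in `cases.json`; note a production implementation also needs the length and character validation that the `docker-too-long.txt` and `invalid-characters.txt` cases probe.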
diff --git a/tests/cross_agent/fixtures/docker_container_id_v2/cases.json b/tests/cross_agent/fixtures/docker_container_id_v2/cases.json new file mode 100644 index 0000000000..83d6360a31 --- /dev/null +++ b/tests/cross_agent/fixtures/docker_container_id_v2/cases.json @@ -0,0 +1,36 @@ +[ + { + "filename": "docker-20.10.16.txt", + "containerId": "84cf3472a20d1bfb4b50e48b6ff50d96dfcd812652d76dd907951e6f98997bce", + "expectedMetrics": null + }, + { + "filename": "docker-24.0.2.txt", + "containerId": "b0a24eed1b031271d8ba0784b8f354b3da892dfd08bbcf14dd7e8a1cf9292f65", + "expectedMetrics": null + }, + { + "filename": "empty.txt", + "containerId": null, + "expectedMetrics": null + }, + { + "filename": "invalid-characters.txt", + "containerId": null, + "expectedMetrics": null + }, + { + "filename": "docker-too-long.txt", + "containerId": null, + "expectedMetrics": null + }, + { + "filename": "invalid-length.txt", + "containerId": null, + "expectedMetrics": { + "Supportability/utilization/docker/error": { + "callCount": 1 + } + } + } +] diff --git a/tests/cross_agent/fixtures/docker_container_id_v2/docker-20.10.16.txt b/tests/cross_agent/fixtures/docker_container_id_v2/docker-20.10.16.txt new file mode 100644 index 0000000000..ce2b1bedf6 --- /dev/null +++ b/tests/cross_agent/fixtures/docker_container_id_v2/docker-20.10.16.txt @@ -0,0 +1,24 @@ +519 413 0:152 / / rw,relatime master:180 - overlay overlay 
rw,lowerdir=/var/lib/docker/overlay2/l/YCID3333O5VYPYDNTQRZX4GI67:/var/lib/docker/overlay2/l/G7H4TULAFM2UBFRL7QFQPUNXY5:/var/lib/docker/overlay2/l/RLC4GCL75VGXXXYJJO57STHIYN:/var/lib/docker/overlay2/l/YOZKNWFAP6YX74XEKPHX4KG4UN:/var/lib/docker/overlay2/l/46EQ6YX5PQQZ4Z3WCSMQ6Z4YWI:/var/lib/docker/overlay2/l/KGKX3Z5ZMOCDWOFKBS2FSHMQMQ:/var/lib/docker/overlay2/l/CKFYAF4TXZD4RCE6RG6UNL5WVI,upperdir=/var/lib/docker/overlay2/358c429f7b04ee5a228b94efaebe3413a98fcc676b726f078fe875727e3bddd2/diff,workdir=/var/lib/docker/overlay2/358c429f7b04ee5a228b94efaebe3413a98fcc676b726f078fe875727e3bddd2/work +520 519 0:155 / /proc rw,nosuid,nodev,noexec,relatime - proc proc rw +521 519 0:156 / /dev rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755 +522 521 0:157 / /dev/pts rw,nosuid,noexec,relatime - devpts devpts rw,gid=5,mode=620,ptmxmode=666 +523 519 0:158 / /sys ro,nosuid,nodev,noexec,relatime - sysfs sysfs ro +524 523 0:30 / /sys/fs/cgroup ro,nosuid,nodev,noexec,relatime - cgroup2 cgroup rw +525 521 0:154 / /dev/mqueue rw,nosuid,nodev,noexec,relatime - mqueue mqueue rw +526 521 0:159 / /dev/shm rw,nosuid,nodev,noexec,relatime - tmpfs shm rw,size=65536k +527 519 254:1 /docker/volumes/3237dea4f8022f1addd7b6f072a9c847eb3e5b8df0d599f462ba7040884d4618/_data /data rw,relatime master:28 - ext4 /dev/vda1 rw +528 519 254:1 /docker/containers/84cf3472a20d1bfb4b50e48b6ff50d96dfcd812652d76dd907951e6f98997bce/resolv.conf /etc/resolv.conf rw,relatime - ext4 /dev/vda1 rw +529 519 254:1 /docker/containers/84cf3472a20d1bfb4b50e48b6ff50d96dfcd812652d76dd907951e6f98997bce/hostname /etc/hostname rw,relatime - ext4 /dev/vda1 rw +530 519 254:1 /docker/containers/84cf3472a20d1bfb4b50e48b6ff50d96dfcd812652d76dd907951e6f98997bce/hosts /etc/hosts rw,relatime - ext4 /dev/vda1 rw +414 521 0:157 /0 /dev/console rw,nosuid,noexec,relatime - devpts devpts rw,gid=5,mode=620,ptmxmode=666 +415 520 0:155 /bus /proc/bus ro,nosuid,nodev,noexec,relatime - proc proc rw +416 520 0:155 /fs /proc/fs 
ro,nosuid,nodev,noexec,relatime - proc proc rw +417 520 0:155 /irq /proc/irq ro,nosuid,nodev,noexec,relatime - proc proc rw +418 520 0:155 /sys /proc/sys ro,nosuid,nodev,noexec,relatime - proc proc rw +419 520 0:155 /sysrq-trigger /proc/sysrq-trigger ro,nosuid,nodev,noexec,relatime - proc proc rw +420 520 0:160 / /proc/acpi ro,relatime - tmpfs tmpfs ro +421 520 0:156 /null /proc/kcore rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755 +422 520 0:156 /null /proc/keys rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755 +423 520 0:156 /null /proc/timer_list rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755 +424 520 0:156 /null /proc/sched_debug rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755 +425 523 0:161 / /sys/firmware ro,relatime - tmpfs tmpfs ro diff --git a/tests/cross_agent/fixtures/docker_container_id_v2/docker-24.0.2.txt b/tests/cross_agent/fixtures/docker_container_id_v2/docker-24.0.2.txt new file mode 100644 index 0000000000..1725e7726a --- /dev/null +++ b/tests/cross_agent/fixtures/docker_container_id_v2/docker-24.0.2.txt @@ -0,0 +1,21 @@ +1014 1013 0:269 / /proc rw,nosuid,nodev,noexec,relatime - proc proc rw +1019 1013 0:270 / /dev rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755 +1020 1019 0:271 / /dev/pts rw,nosuid,noexec,relatime - devpts devpts rw,gid=5,mode=620,ptmxmode=666 +1021 1013 0:272 / /sys ro,nosuid,nodev,noexec,relatime - sysfs sysfs ro +1022 1021 0:30 / /sys/fs/cgroup ro,nosuid,nodev,noexec,relatime - cgroup2 cgroup rw +1023 1019 0:268 / /dev/mqueue rw,nosuid,nodev,noexec,relatime - mqueue mqueue rw +1024 1019 0:273 / /dev/shm rw,nosuid,nodev,noexec,relatime - tmpfs shm rw,size=65536k +1025 1013 254:1 /docker/containers/b0a24eed1b031271d8ba0784b8f354b3da892dfd08bbcf14dd7e8a1cf9292f65/resolv.conf /etc/resolv.conf rw,relatime - ext4 /dev/vda1 rw,discard +1026 1013 254:1 /docker/containers/b0a24eed1b031271d8ba0784b8f354b3da892dfd08bbcf14dd7e8a1cf9292f65/hostname /etc/hostname rw,relatime - ext4 /dev/vda1 rw,discard +1027 1013 254:1 
/docker/containers/b0a24eed1b031271d8ba0784b8f354b3da892dfd08bbcf14dd7e8a1cf9292f65/hosts /etc/hosts rw,relatime - ext4 /dev/vda1 rw,discard +717 1019 0:271 /0 /dev/console rw,nosuid,noexec,relatime - devpts devpts rw,gid=5,mode=620,ptmxmode=666 +718 1014 0:269 /bus /proc/bus ro,nosuid,nodev,noexec,relatime - proc proc rw +719 1014 0:269 /fs /proc/fs ro,nosuid,nodev,noexec,relatime - proc proc rw +720 1014 0:269 /irq /proc/irq ro,nosuid,nodev,noexec,relatime - proc proc rw +721 1014 0:269 /sys /proc/sys ro,nosuid,nodev,noexec,relatime - proc proc rw +723 1014 0:269 /sysrq-trigger /proc/sysrq-trigger ro,nosuid,nodev,noexec,relatime - proc proc rw +726 1014 0:274 / /proc/acpi ro,relatime - tmpfs tmpfs ro +727 1014 0:270 /null /proc/kcore rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755 +728 1014 0:270 /null /proc/keys rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755 +729 1014 0:270 /null /proc/timer_list rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755 +730 1021 0:275 / /sys/firmware ro,relatime - tmpfs tmpfs ro diff --git a/tests/cross_agent/fixtures/docker_container_id_v2/docker-too-long.txt b/tests/cross_agent/fixtures/docker_container_id_v2/docker-too-long.txt new file mode 100644 index 0000000000..608eaf7a49 --- /dev/null +++ b/tests/cross_agent/fixtures/docker_container_id_v2/docker-too-long.txt @@ -0,0 +1,21 @@ +1014 1013 0:269 / /proc rw,nosuid,nodev,noexec,relatime - proc proc rw +1019 1013 0:270 / /dev rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755 +1020 1019 0:271 / /dev/pts rw,nosuid,noexec,relatime - devpts devpts rw,gid=5,mode=620,ptmxmode=666 +1021 1013 0:272 / /sys ro,nosuid,nodev,noexec,relatime - sysfs sysfs ro +1022 1021 0:30 / /sys/fs/cgroup ro,nosuid,nodev,noexec,relatime - cgroup2 cgroup rw +1023 1019 0:268 / /dev/mqueue rw,nosuid,nodev,noexec,relatime - mqueue mqueue rw +1024 1019 0:273 / /dev/shm rw,nosuid,nodev,noexec,relatime - tmpfs shm rw,size=65536k +1025 1013 254:1 
/docker/containers/3ccfa00432798ff38f85839de1e396f771b4acbe9f4ddea0a761c39b9790a7821/resolv.conf /etc/resolv.conf rw,relatime - ext4 /dev/vda1 rw,discard +1026 1013 254:1 /docker/containers/3ccfa00432798ff38f85839de1e396f771b4acbe9f4ddea0a761c39b9790a7821/hostname /etc/hostname rw,relatime - ext4 /dev/vda1 rw,discard +1027 1013 254:1 /docker/containers/3ccfa00432798ff38f85839de1e396f771b4acbe9f4ddea0a761c39b9790a7821/hosts /etc/hosts rw,relatime - ext4 /dev/vda1 rw,discard +717 1019 0:271 /0 /dev/console rw,nosuid,noexec,relatime - devpts devpts rw,gid=5,mode=620,ptmxmode=666 +718 1014 0:269 /bus /proc/bus ro,nosuid,nodev,noexec,relatime - proc proc rw +719 1014 0:269 /fs /proc/fs ro,nosuid,nodev,noexec,relatime - proc proc rw +720 1014 0:269 /irq /proc/irq ro,nosuid,nodev,noexec,relatime - proc proc rw +721 1014 0:269 /sys /proc/sys ro,nosuid,nodev,noexec,relatime - proc proc rw +723 1014 0:269 /sysrq-trigger /proc/sysrq-trigger ro,nosuid,nodev,noexec,relatime - proc proc rw +726 1014 0:274 / /proc/acpi ro,relatime - tmpfs tmpfs ro +727 1014 0:270 /null /proc/kcore rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755 +728 1014 0:270 /null /proc/keys rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755 +729 1014 0:270 /null /proc/timer_list rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755 +730 1021 0:275 / /sys/firmware ro,relatime - tmpfs tmpfs ro diff --git a/tests/cross_agent/fixtures/docker_container_id_v2/empty.txt b/tests/cross_agent/fixtures/docker_container_id_v2/empty.txt new file mode 100644 index 0000000000..e69de29bb2 diff --git a/tests/cross_agent/fixtures/docker_container_id_v2/invalid-characters.txt b/tests/cross_agent/fixtures/docker_container_id_v2/invalid-characters.txt new file mode 100644 index 0000000000..b561475ac6 --- /dev/null +++ b/tests/cross_agent/fixtures/docker_container_id_v2/invalid-characters.txt @@ -0,0 +1,21 @@ +1014 1013 0:269 / /proc rw,nosuid,nodev,noexec,relatime - proc proc rw +1019 1013 0:270 / /dev rw,nosuid - tmpfs tmpfs 
rw,size=65536k,mode=755 +1020 1019 0:271 / /dev/pts rw,nosuid,noexec,relatime - devpts devpts rw,gid=5,mode=620,ptmxmode=666 +1021 1013 0:272 / /sys ro,nosuid,nodev,noexec,relatime - sysfs sysfs ro +1022 1021 0:30 / /sys/fs/cgroup ro,nosuid,nodev,noexec,relatime - cgroup2 cgroup rw +1023 1019 0:268 / /dev/mqueue rw,nosuid,nodev,noexec,relatime - mqueue mqueue rw +1024 1019 0:273 / /dev/shm rw,nosuid,nodev,noexec,relatime - tmpfs shm rw,size=65536k +1025 1013 254:1 /docker/containers/WRONGINCORRECTINVALIDCHARSERRONEOUSBADPHONYBROKEN2TERRIBLENOPE55/resolv.conf /etc/resolv.conf rw,relatime - ext4 /dev/vda1 rw,discard +1026 1013 254:1 /docker/containers/WRONGINCORRECTINVALIDCHARSERRONEOUSBADPHONYBROKEN2TERRIBLENOPE55/hostname /etc/hostname rw,relatime - ext4 /dev/vda1 rw,discard +1027 1013 254:1 /docker/containers/WRONGINCORRECTINVALIDCHARSERRONEOUSBADPHONYBROKEN2TERRIBLENOPE55/hosts /etc/hosts rw,relatime - ext4 /dev/vda1 rw,discard +717 1019 0:271 /0 /dev/console rw,nosuid,noexec,relatime - devpts devpts rw,gid=5,mode=620,ptmxmode=666 +718 1014 0:269 /bus /proc/bus ro,nosuid,nodev,noexec,relatime - proc proc rw +719 1014 0:269 /fs /proc/fs ro,nosuid,nodev,noexec,relatime - proc proc rw +720 1014 0:269 /irq /proc/irq ro,nosuid,nodev,noexec,relatime - proc proc rw +721 1014 0:269 /sys /proc/sys ro,nosuid,nodev,noexec,relatime - proc proc rw +723 1014 0:269 /sysrq-trigger /proc/sysrq-trigger ro,nosuid,nodev,noexec,relatime - proc proc rw +726 1014 0:274 / /proc/acpi ro,relatime - tmpfs tmpfs ro +727 1014 0:270 /null /proc/kcore rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755 +728 1014 0:270 /null /proc/keys rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755 +729 1014 0:270 /null /proc/timer_list rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755 +730 1021 0:275 / /sys/firmware ro,relatime - tmpfs tmpfs ro diff --git a/tests/cross_agent/fixtures/docker_container_id_v2/invalid-length.txt b/tests/cross_agent/fixtures/docker_container_id_v2/invalid-length.txt new file mode 100644 
index 0000000000..a8987df707 --- /dev/null +++ b/tests/cross_agent/fixtures/docker_container_id_v2/invalid-length.txt @@ -0,0 +1,21 @@ +1014 1013 0:269 / /proc rw,nosuid,nodev,noexec,relatime - proc proc rw +1019 1013 0:270 / /dev rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755 +1020 1019 0:271 / /dev/pts rw,nosuid,noexec,relatime - devpts devpts rw,gid=5,mode=620,ptmxmode=666 +1021 1013 0:272 / /sys ro,nosuid,nodev,noexec,relatime - sysfs sysfs ro +1022 1021 0:30 / /sys/fs/cgroup ro,nosuid,nodev,noexec,relatime - cgroup2 cgroup rw +1023 1019 0:268 / /dev/mqueue rw,nosuid,nodev,noexec,relatime - mqueue mqueue rw +1024 1019 0:273 / /dev/shm rw,nosuid,nodev,noexec,relatime - tmpfs shm rw,size=65536k +1025 1013 254:1 /docker/containers/47cbd16b77c5/resolv.conf /etc/resolv.conf rw,relatime - ext4 /dev/vda1 rw,discard +1026 1013 254:1 /docker/containers/47cbd16b77c5/hostname /etc/hostname rw,relatime - ext4 /dev/vda1 rw,discard +1027 1013 254:1 /docker/containers/47cbd16b77c5/hosts /etc/hosts rw,relatime - ext4 /dev/vda1 rw,discard +717 1019 0:271 /0 /dev/console rw,nosuid,noexec,relatime - devpts devpts rw,gid=5,mode=620,ptmxmode=666 +718 1014 0:269 /bus /proc/bus ro,nosuid,nodev,noexec,relatime - proc proc rw +719 1014 0:269 /fs /proc/fs ro,nosuid,nodev,noexec,relatime - proc proc rw +720 1014 0:269 /irq /proc/irq ro,nosuid,nodev,noexec,relatime - proc proc rw +721 1014 0:269 /sys /proc/sys ro,nosuid,nodev,noexec,relatime - proc proc rw +723 1014 0:269 /sysrq-trigger /proc/sysrq-trigger ro,nosuid,nodev,noexec,relatime - proc proc rw +726 1014 0:274 / /proc/acpi ro,relatime - tmpfs tmpfs ro +727 1014 0:270 /null /proc/kcore rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755 +728 1014 0:270 /null /proc/keys rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755 +729 1014 0:270 /null /proc/timer_list rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755 +730 1021 0:275 / /sys/firmware ro,relatime - tmpfs tmpfs ro diff --git a/tests/cross_agent/fixtures/rum_client_config.json 
b/tests/cross_agent/fixtures/rum_client_config.json deleted file mode 100644 index 8f6e7cbbbe..0000000000 --- a/tests/cross_agent/fixtures/rum_client_config.json +++ /dev/null @@ -1,91 +0,0 @@ -[ - { - "testname":"all fields present", - - "apptime_milliseconds":5, - "queuetime_milliseconds":3, - "browser_monitoring.attributes.enabled":true, - "transaction_name":"WebTransaction/brink/of/glory", - "license_key":"0000111122223333444455556666777788889999", - "connect_reply": - { - "beacon":"my_beacon", - "browser_key":"my_browser_key", - "application_id":"my_application_id", - "error_beacon":"my_error_beacon", - "js_agent_file":"my_js_agent_file" - }, - "user_attributes":{"alpha":"beta"}, - "expected": - { - "beacon":"my_beacon", - "licenseKey":"my_browser_key", - "applicationID":"my_application_id", - "transactionName":"Z1VSZENQX0JTUUZbXF4fUkJYX1oeXVQdVV9fQkk=", - "queueTime":3, - "applicationTime":5, - "atts":"SxJFEgtKE1BeQlpTEQoSUlVFUBNMTw==", - "errorBeacon":"my_error_beacon", - "agent":"my_js_agent_file" - } - }, - { - "testname":"browser_monitoring.attributes.enabled disabled", - - "apptime_milliseconds":5, - "queuetime_milliseconds":3, - "browser_monitoring.attributes.enabled":false, - "transaction_name":"WebTransaction/brink/of/glory", - "license_key":"0000111122223333444455556666777788889999", - "connect_reply": - { - "beacon":"my_beacon", - "browser_key":"my_browser_key", - "application_id":"my_application_id", - "error_beacon":"my_error_beacon", - "js_agent_file":"my_js_agent_file" - }, - "user_attributes":{"alpha":"beta"}, - "expected": - { - "beacon":"my_beacon", - "licenseKey":"my_browser_key", - "applicationID":"my_application_id", - "transactionName":"Z1VSZENQX0JTUUZbXF4fUkJYX1oeXVQdVV9fQkk=", - "queueTime":3, - "applicationTime":5, - "atts":"", - "errorBeacon":"my_error_beacon", - "agent":"my_js_agent_file" - } - }, - { - "testname":"empty js_agent_file", - "apptime_milliseconds":5, - "queuetime_milliseconds":3, - 
"browser_monitoring.attributes.enabled":true, - "transaction_name":"WebTransaction/brink/of/glory", - "license_key":"0000111122223333444455556666777788889999", - "connect_reply": - { - "beacon":"my_beacon", - "browser_key":"my_browser_key", - "application_id":"my_application_id", - "error_beacon":"my_error_beacon", - "js_agent_file":"" - }, - "user_attributes":{"alpha":"beta"}, - "expected": - { - "beacon":"my_beacon", - "licenseKey":"my_browser_key", - "applicationID":"my_application_id", - "transactionName":"Z1VSZENQX0JTUUZbXF4fUkJYX1oeXVQdVV9fQkk=", - "queueTime":3, - "applicationTime":5, - "atts":"SxJFEgtKE1BeQlpTEQoSUlVFUBNMTw==", - "errorBeacon":"my_error_beacon", - "agent":"" - } - } -] diff --git a/tests/cross_agent/fixtures/rum_footer_insertion_location/close-body-in-comment.html b/tests/cross_agent/fixtures/rum_footer_insertion_location/close-body-in-comment.html deleted file mode 100644 index e32df24204..0000000000 --- a/tests/cross_agent/fixtures/rum_footer_insertion_location/close-body-in-comment.html +++ /dev/null @@ -1,26 +0,0 @@ - - - - - - Comment contains a close body tag - - -

The quick brown fox jumps over the lazy dog.

- - EXPECTED_RUM_FOOTER_LOCATION - diff --git a/tests/cross_agent/fixtures/rum_footer_insertion_location/dynamic-iframe.html b/tests/cross_agent/fixtures/rum_footer_insertion_location/dynamic-iframe.html deleted file mode 100644 index 5e1acc86b5..0000000000 --- a/tests/cross_agent/fixtures/rum_footer_insertion_location/dynamic-iframe.html +++ /dev/null @@ -1,35 +0,0 @@ - - - - - - Dynamic iframe Generation - - -

The quick brown fox jumps over the lazy dog.

- - - EXPECTED_RUM_FOOTER_LOCATION - diff --git a/tests/cross_agent/test_cat_map.py b/tests/cross_agent/test_cat_map.py index 6e7ac63d6d..ea011990a8 100644 --- a/tests/cross_agent/test_cat_map.py +++ b/tests/cross_agent/test_cat_map.py @@ -43,7 +43,6 @@ from newrelic.api.external_trace import ExternalTrace from newrelic.api.transaction import ( current_transaction, - get_browser_timing_footer, get_browser_timing_header, set_background_task, set_transaction_name, @@ -134,9 +133,9 @@ def target_wsgi_application(environ, start_response): set_background_task(True) set_transaction_name(txn_name[2], group=txn_name[1]) - text = "%s

RESPONSE

%s" + text = "%s

RESPONSE

" - - output = (text % (get_browser_timing_header(), get_browser_timing_footer())).encode("UTF-8") + output = (text % get_browser_timing_header()).encode("UTF-8") response_headers = [("Content-type", "text/html; charset=utf-8"), ("Content-Length", str(len(output)))] start_response(status, response_headers) @@ -193,7 +192,6 @@ def test_cat_map( @override_application_settings(_custom_settings) @override_application_name(appName) def run_cat_test(): - if six.PY2: txn_name = transactionName.encode("UTF-8") guid = transactionGuid.encode("UTF-8") diff --git a/tests/cross_agent/test_docker.py b/tests/cross_agent/test_docker_container_id.py similarity index 50% rename from tests/cross_agent/test_docker.py rename to tests/cross_agent/test_docker_container_id.py index fd919932b2..a61c80ae62 100644 --- a/tests/cross_agent/test_docker.py +++ b/tests/cross_agent/test_docker_container_id.py @@ -13,39 +13,48 @@ # limitations under the License. import json -import mock import os + import pytest import newrelic.common.utilization as u CURRENT_DIR = os.path.dirname(os.path.realpath(__file__)) -DOCKER_FIXTURE = os.path.join(CURRENT_DIR, 'fixtures', 'docker_container_id') +DOCKER_FIXTURE = os.path.join(CURRENT_DIR, "fixtures", "docker_container_id") def _load_docker_test_attributes(): """Returns a list of docker test attributes in the form: - [(<filename>, <containerId>), ...] + [(<filename>, <containerId>), ...] 
""" docker_test_attributes = [] - test_cases = os.path.join(DOCKER_FIXTURE, 'cases.json') - with open(test_cases, 'r') as fh: + test_cases = os.path.join(DOCKER_FIXTURE, "cases.json") + with open(test_cases, "r") as fh: js = fh.read() json_list = json.loads(js) for json_record in json_list: - docker_test_attributes.append( - (json_record['filename'], json_record['containerId'])) + docker_test_attributes.append((json_record["filename"], json_record["containerId"])) return docker_test_attributes -@pytest.mark.parametrize('filename, containerId', - _load_docker_test_attributes()) -def test_docker_container_id(filename, containerId): +def mock_open(mock_file): + def _mock_open(filename, mode): + if filename == "/proc/self/mountinfo": + raise FileNotFoundError() + elif filename == "/proc/self/cgroup": + return mock_file + raise RuntimeError() + + return _mock_open + + +@pytest.mark.parametrize("filename, containerId", _load_docker_test_attributes()) +def test_docker_container_id_v1(monkeypatch, filename, containerId): path = os.path.join(DOCKER_FIXTURE, filename) - with open(path, 'rb') as f: - with mock.patch.object(u, 'open', create=True, return_value=f): - if containerId is not None: - assert u.DockerUtilization.detect() == {'id': containerId} - else: - assert u.DockerUtilization.detect() is None + with open(path, "rb") as f: + monkeypatch.setattr(u, "open", mock_open(f), raising=False) + if containerId is not None: + assert u.DockerUtilization.detect() == {"id": containerId} + else: + assert u.DockerUtilization.detect() is None diff --git a/tests/cross_agent/test_docker_container_id_v2.py b/tests/cross_agent/test_docker_container_id_v2.py new file mode 100644 index 0000000000..3c9397459b --- /dev/null +++ b/tests/cross_agent/test_docker_container_id_v2.py @@ -0,0 +1,60 @@ +# Copyright 2010 New Relic, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import json +import os + +import pytest + +import newrelic.common.utilization as u + +CURRENT_DIR = os.path.dirname(os.path.realpath(__file__)) +DOCKER_FIXTURE = os.path.join(CURRENT_DIR, "fixtures", "docker_container_id_v2") + + +def _load_docker_test_attributes(): + """Returns a list of docker test attributes in the form: + [(<filename>, <containerId>), ...] + + """ + docker_test_attributes = [] + test_cases = os.path.join(DOCKER_FIXTURE, "cases.json") + with open(test_cases, "r") as fh: + js = fh.read() + json_list = json.loads(js) + for json_record in json_list: + docker_test_attributes.append((json_record["filename"], json_record["containerId"])) + return docker_test_attributes + + +def mock_open(mock_file): + def _mock_open(filename, mode): + if filename == "/proc/self/cgroup": + raise FileNotFoundError() + elif filename == "/proc/self/mountinfo": + return mock_file + raise RuntimeError() + + return _mock_open + + +@pytest.mark.parametrize("filename, containerId", _load_docker_test_attributes()) +def test_docker_container_id_v2(monkeypatch, filename, containerId): + path = os.path.join(DOCKER_FIXTURE, filename) + with open(path, "rb") as f: + monkeypatch.setattr(u, "open", mock_open(f), raising=False) + if containerId is not None: + assert u.DockerUtilization.detect() == {"id": containerId} + else: + assert u.DockerUtilization.detect() is None diff --git a/tests/cross_agent/test_lambda_event_source.py b/tests/cross_agent/test_lambda_event_source.py index 511294cf6f..de796a6b0f 100644 --- a/tests/cross_agent/test_lambda_event_source.py +++ 
b/tests/cross_agent/test_lambda_event_source.py @@ -14,27 +14,30 @@ import json import os + import pytest +from testing_support.fixtures import override_application_settings +from testing_support.validators.validate_transaction_event_attributes import ( + validate_transaction_event_attributes, +) from newrelic.api.lambda_handler import lambda_handler -from testing_support.fixtures import override_application_settings -from testing_support.validators.validate_transaction_event_attributes import validate_transaction_event_attributes CURRENT_DIR = os.path.dirname(os.path.realpath(__file__)) -FIXTURE_DIR = os.path.normpath(os.path.join(CURRENT_DIR, 'fixtures')) -FIXTURE = os.path.join(FIXTURE_DIR, 'lambda_event_source.json') +FIXTURE_DIR = os.path.normpath(os.path.join(CURRENT_DIR, "fixtures")) +FIXTURE = os.path.join(FIXTURE_DIR, "lambda_event_source.json") tests = {} events = {} def _load_tests(): - with open(FIXTURE, 'r') as fh: + with open(FIXTURE, "r") as fh: for test in json.loads(fh.read()): - test_name = test.pop('name') + test_name = test.pop("name") - test_file = test_name + '.json' - path = os.path.join(FIXTURE_DIR, 'lambda_event_source', test_file) - with open(path, 'r') as fh: + test_file = test_name + ".json" + path = os.path.join(FIXTURE_DIR, "lambda_event_source", test_file) + with open(path, "r") as fh: events[test_name] = json.loads(fh.read()) tests[test_name] = test @@ -42,37 +45,39 @@ def _load_tests(): class Context(object): - aws_request_id = 'cookies' - invoked_function_arn = 'arn' - function_name = 'cats' - function_version = '$LATEST' + aws_request_id = "cookies" + invoked_function_arn = "arn" + function_name = "cats" + function_version = "$LATEST" memory_limit_in_mb = 128 @lambda_handler() def handler(event, context): return { - 'statusCode': '200', - 'body': '{}', - 'headers': { - 'Content-Type': 'application/json', - 'Content-Length': 2, + "statusCode": "200", + "body": "{}", + "headers": { + "Content-Type": "application/json", + 
"Content-Length": 2, }, } -@pytest.mark.parametrize('test_name', _load_tests()) +# The lambda_handler has been deprecated for 3+ years +@pytest.mark.skip(reason="The lambda_handler has been deprecated") +@pytest.mark.parametrize("test_name", _load_tests()) def test_lambda_event_source(test_name): - _exact = {'user': {}, 'intrinsic': {}, 'agent': {}} + _exact = {"user": {}, "intrinsic": {}, "agent": {}} - expected_arn = tests[test_name].get('aws.lambda.eventSource.arn', None) + expected_arn = tests[test_name].get("aws.lambda.eventSource.arn", None) if expected_arn: - _exact['agent']['aws.lambda.eventSource.arn'] = expected_arn + _exact["agent"]["aws.lambda.eventSource.arn"] = expected_arn else: pytest.skip("Nothing to test!") return - @override_application_settings({'attributes.include': ['aws.*']}) + @override_application_settings({"attributes.include": ["aws.*"]}) @validate_transaction_event_attributes({}, exact_attrs=_exact) def _test(): handler(events[test_name], Context) diff --git a/tests/cross_agent/test_rum_client_config.py b/tests/cross_agent/test_rum_client_config.py deleted file mode 100644 index 5b8da4b84c..0000000000 --- a/tests/cross_agent/test_rum_client_config.py +++ /dev/null @@ -1,145 +0,0 @@ -# Copyright 2010 New Relic, Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -import json -import os - -import pytest -import webtest -from testing_support.fixtures import override_application_settings - -from newrelic.api.transaction import ( - add_custom_attribute, - get_browser_timing_footer, - set_transaction_name, -) -from newrelic.api.wsgi_application import wsgi_application - -CURRENT_DIR = os.path.dirname(os.path.realpath(__file__)) -FIXTURE = os.path.join(CURRENT_DIR, "fixtures", "rum_client_config.json") - -def _load_tests(): - with open(FIXTURE, "r") as fh: - js = fh.read() - return json.loads(js) - - -fields = [ - "testname", - "apptime_milliseconds", - "queuetime_milliseconds", - "browser_monitoring.attributes.enabled", - "transaction_name", - "license_key", - "connect_reply", - "user_attributes", - "expected", -] - -# Replace . as not a valid character in python argument names - -field_names = ",".join([f.replace(".", "_") for f in fields]) - - -def _parametrize_test(test): - return tuple([test.get(f, None) for f in fields]) - - -_rum_tests = [_parametrize_test(t) for t in _load_tests()] - - -@wsgi_application() -def target_wsgi_application(environ, start_response): - status = "200 OK" - - txn_name = environ.get("txn_name") - set_transaction_name(txn_name, group="") - - user_attrs = json.loads(environ.get("user_attrs")) - for key, value in user_attrs.items(): - add_custom_attribute(key, value) - - text = "%s

RESPONSE

" - - output = (text % get_browser_timing_footer()).encode("UTF-8") - - response_headers = [("Content-Type", "text/html; charset=utf-8"), ("Content-Length", str(len(output)))] - start_response(status, response_headers) - - return [output] - - -target_application = webtest.TestApp(target_wsgi_application) - - -@pytest.mark.parametrize(field_names, _rum_tests) -def test_browser_montioring( - testname, - apptime_milliseconds, - queuetime_milliseconds, - browser_monitoring_attributes_enabled, - transaction_name, - license_key, - connect_reply, - user_attributes, - expected, -): - - settings = { - "browser_monitoring.attributes.enabled": browser_monitoring_attributes_enabled, - "license_key": license_key, - "js_agent_loader": "", - } - settings.update(connect_reply) - - @override_application_settings(settings) - def run_browser_data_test(): - - response = target_application.get( - "/", extra_environ={"txn_name": str(transaction_name), "user_attrs": json.dumps(user_attributes)} - ) - - # We actually put the "footer" in the header, the first script is the - # agent "header", the second one is where the data lives, hence the [1]. - - footer = response.html.html.head.find_all("script")[1] - footer_data = json.loads(footer.string.split("NREUM.info=")[1]) - - # Not feasible to test the time metric values in testing - - expected.pop("queueTime") - expected.pop("applicationTime") - assert footer_data["applicationTime"] >= 0 - assert footer_data["queueTime"] >= 0 - - # Python always prepends stuff to the transaction name, so this - # doesn't match the obscured value. - - expected.pop("transactionName") - - # Check that all other values are correct - - for key, value in expected.items(): - - # If there are no attributes, the spec allows us to omit the - # 'atts' field altogether, so we do. But, the cross agent tests - # don't omit it, so we need to special case 'atts' when we compare - # to 'expected'. 
- - if key == "atts" and value == "": - assert key not in footer_data - else: - assert footer_data[key] == value - - run_browser_data_test() diff --git a/tests/datastore_asyncpg/test_multiple_dbs.py b/tests/datastore_asyncpg/test_multiple_dbs.py index a917a9e83d..9d7a3de95e 100644 --- a/tests/datastore_asyncpg/test_multiple_dbs.py +++ b/tests/datastore_asyncpg/test_multiple_dbs.py @@ -12,20 +12,21 @@ # See the License for the specific language governing permissions and # limitations under the License. -import asyncio - import asyncpg import pytest from testing_support.db_settings import postgresql_settings from testing_support.fixtures import override_application_settings -from testing_support.validators.validate_transaction_metrics import validate_transaction_metrics from testing_support.util import instance_hostname +from testing_support.validators.validate_transaction_metrics import ( + validate_transaction_metrics, +) from newrelic.api.background_task import background_task +from newrelic.common.package_version_utils import get_package_version_tuple DB_MULTIPLE_SETTINGS = postgresql_settings() -ASYNCPG_VERSION = tuple(int(x) for x in getattr(asyncpg, "__version__", "0.0").split(".")[:2]) +ASYNCPG_VERSION = get_package_version_tuple("asyncpg") if ASYNCPG_VERSION < (0, 11): CONNECT_METRICS = [] @@ -100,7 +101,6 @@ async def _exercise_db(): - postgresql1 = DB_MULTIPLE_SETTINGS[0] postgresql2 = DB_MULTIPLE_SETTINGS[1] @@ -145,6 +145,7 @@ async def _exercise_db(): ) @background_task() def test_multiple_databases_enable_instance(event_loop): + assert ASYNCPG_VERSION is not None event_loop.run_until_complete(_exercise_db()) @@ -161,4 +162,5 @@ def test_multiple_databases_enable_instance(event_loop): ) @background_task() def test_multiple_databases_disable_instance(event_loop): + assert ASYNCPG_VERSION is not None event_loop.run_until_complete(_exercise_db()) diff --git a/tests/datastore_asyncpg/test_query.py b/tests/datastore_asyncpg/test_query.py index 
838ced61da..6deb7ca9a8 100644 --- a/tests/datastore_asyncpg/test_query.py +++ b/tests/datastore_asyncpg/test_query.py @@ -27,12 +27,13 @@ ) from newrelic.api.background_task import background_task +from newrelic.common.package_version_utils import get_package_version_tuple DB_SETTINGS = postgresql_settings()[0] PG_PREFIX = "Datastore/operation/Postgres/" -ASYNCPG_VERSION = tuple(int(x) for x in getattr(asyncpg, "__version__", "0.0").split(".")[:2]) +ASYNCPG_VERSION = get_package_version_tuple("asyncpg") if ASYNCPG_VERSION < (0, 11): CONNECT_METRICS = () @@ -65,6 +66,7 @@ def conn(event_loop): @background_task(name="test_single") @pytest.mark.parametrize("method", ("execute",)) def test_single(event_loop, method, conn): + assert ASYNCPG_VERSION is not None _method = getattr(conn, method) event_loop.run_until_complete(_method("""SELECT 0""")) @@ -81,6 +83,7 @@ def test_single(event_loop, method, conn): @background_task(name="test_prepared_single") @pytest.mark.parametrize("method", ("fetch", "fetchrow", "fetchval")) def test_prepared_single(event_loop, method, conn): + assert ASYNCPG_VERSION is not None _method = getattr(conn, method) event_loop.run_until_complete(_method("""SELECT 0""")) @@ -93,6 +96,7 @@ def test_prepared_single(event_loop, method, conn): ) @background_task(name="test_prepare") def test_prepare(event_loop, conn): + assert ASYNCPG_VERSION is not None event_loop.run_until_complete(conn.prepare("""SELECT 0""")) @@ -125,6 +129,7 @@ async def amain(): # 2 statements await conn.copy_from_query("""SELECT 0""", output=BytesIO()) + assert ASYNCPG_VERSION is not None event_loop.run_until_complete(amain()) @@ -139,6 +144,7 @@ async def amain(): ) @background_task(name="test_select_many") def test_select_many(event_loop, conn): + assert ASYNCPG_VERSION is not None event_loop.run_until_complete(conn.executemany("""SELECT $1::int""", ((1,), (2,)))) @@ -158,6 +164,7 @@ async def amain(): async with conn.transaction(): await conn.execute("""SELECT 0""") + assert 
ASYNCPG_VERSION is not None event_loop.run_until_complete(amain()) @@ -181,6 +188,7 @@ async def amain(): await conn.cursor("SELECT 0") + assert ASYNCPG_VERSION is not None event_loop.run_until_complete(amain()) @@ -200,6 +208,7 @@ async def amain(): ) @background_task(name="test_unix_socket_connect") def test_unix_socket_connect(event_loop): + assert ASYNCPG_VERSION is not None with pytest.raises(OSError): event_loop.run_until_complete(asyncpg.connect("postgres://?host=/.s.PGSQL.THIS_FILE_BETTER_NOT_EXIST")) @@ -233,4 +242,5 @@ async def amain(): finally: await pool.close() + assert ASYNCPG_VERSION is not None event_loop.run_until_complete(amain()) diff --git a/tests/datastore_mysql/test_database.py b/tests/datastore_mysql/test_database.py index 8f86419039..d14e11a41f 100644 --- a/tests/datastore_mysql/test_database.py +++ b/tests/datastore_mysql/test_database.py @@ -23,13 +23,15 @@ ) from newrelic.api.background_task import background_task +from newrelic.common.package_version_utils import get_package_version_tuple DB_SETTINGS = mysql_settings() DB_SETTINGS = DB_SETTINGS[0] DB_NAMESPACE = DB_SETTINGS["namespace"] DB_PROCEDURE = "hello_" + DB_NAMESPACE -mysql_version = tuple(int(x) for x in mysql.connector.__version__.split(".")[:3]) +mysql_version = get_package_version_tuple("mysql.connector") + if mysql_version >= (8, 0, 30): _connector_metric_name = "Function/mysql.connector.pooling:connect" else: @@ -71,6 +73,12 @@ ] +@validate_transaction_metrics( + "test_database:test_execute_via_cursor", + scoped_metrics=_test_execute_via_cursor_scoped_metrics, + rollup_metrics=_test_execute_via_cursor_rollup_metrics, + background_task=True, +) @validate_transaction_metrics( "test_database:test_execute_via_cursor", scoped_metrics=_test_execute_via_cursor_scoped_metrics, @@ -80,7 +88,7 @@ @validate_database_trace_inputs(sql_parameters_type=dict) @background_task() def test_execute_via_cursor(table_name): - + assert mysql_version is not None connection = 
mysql.connector.connect( db=DB_SETTINGS["name"], user=DB_SETTINGS["user"], @@ -97,7 +105,7 @@ def test_execute_via_cursor(table_name): cursor.executemany( """insert into `%s` """ % table_name + """values (%(a)s, %(b)s, %(c)s)""", - [dict(a=1, b=1.0, c="1.0"), dict(a=2, b=2.2, c="2.2"), dict(a=3, b=3.3, c="3.3")], + [{"a": 1, "b": 1.0, "c": "1.0"}, {"a": 2, "b": 2.2, "c": "2.2"}, {"a": 3, "b": 3.3, "c": "3.3"}], ) cursor.execute("""select * from %s""" % table_name) @@ -107,7 +115,7 @@ def test_execute_via_cursor(table_name): cursor.execute( """update `%s` """ % table_name + """set a=%(a)s, b=%(b)s, c=%(c)s where a=%(old_a)s""", - dict(a=4, b=4.0, c="4.0", old_a=1), + {"a": 4, "b": 4.0, "c": "4.0", "old_a": 1}, ) cursor.execute("""delete from `%s` where a=2""" % table_name) @@ -173,7 +181,7 @@ def test_execute_via_cursor(table_name): @validate_database_trace_inputs(sql_parameters_type=dict) @background_task() def test_connect_using_alias(table_name): - + assert mysql_version is not None connection = mysql.connector.connect( db=DB_SETTINGS["name"], user=DB_SETTINGS["user"], @@ -190,7 +198,7 @@ def test_connect_using_alias(table_name): cursor.executemany( """insert into `%s` """ % table_name + """values (%(a)s, %(b)s, %(c)s)""", - [dict(a=1, b=1.0, c="1.0"), dict(a=2, b=2.2, c="2.2"), dict(a=3, b=3.3, c="3.3")], + [{"a": 1, "b": 1.0, "c": "1.0"}, {"a": 2, "b": 2.2, "c": "2.2"}, {"a": 3, "b": 3.3, "c": "3.3"}], ) cursor.execute("""select * from %s""" % table_name) @@ -200,7 +208,7 @@ def test_connect_using_alias(table_name): cursor.execute( """update `%s` """ % table_name + """set a=%(a)s, b=%(b)s, c=%(c)s where a=%(old_a)s""", - dict(a=4, b=4.0, c="4.0", old_a=1), + {"a": 4, "b": 4.0, "c": "4.0", "old_a": 1}, ) cursor.execute("""delete from `%s` where a=2""" % table_name) diff --git a/tests/datastore_psycopg2cffi/test_database.py b/tests/datastore_psycopg2cffi/test_database.py index 939c5cabcb..0b3ff87d3d 100644 --- a/tests/datastore_psycopg2cffi/test_database.py +++ 
b/tests/datastore_psycopg2cffi/test_database.py @@ -32,6 +32,7 @@ ) from newrelic.api.background_task import background_task +from newrelic.common.package_version_utils import get_package_version_tuple DB_SETTINGS = postgresql_settings()[0] @@ -91,7 +92,6 @@ def test_execute_via_cursor(): host=DB_SETTINGS["host"], port=DB_SETTINGS["port"], ) as connection: - cursor = connection.cursor() psycopg2cffi.extensions.register_type(psycopg2cffi.extensions.UNICODE) @@ -161,7 +161,6 @@ def test_rollback_on_exception(): host=DB_SETTINGS["host"], port=DB_SETTINGS["port"], ): - raise RuntimeError("error") except RuntimeError: pass @@ -202,11 +201,11 @@ def test_rollback_on_exception(): @validate_transaction_errors(errors=[]) @background_task() def test_async_mode(): - wait = psycopg2cffi.extras.wait_select kwargs = {} - version = tuple(int(_) for _ in psycopg2cffi.__version__.split(".")) + version = get_package_version_tuple("psycopg2cffi") + assert version is not None if version >= (2, 8): kwargs["async_"] = 1 else: diff --git a/tests/external_botocore/test_boto3_iam.py b/tests/external_botocore/test_boto3_iam.py index 3d672f3751..1bd05669a4 100644 --- a/tests/external_botocore/test_boto3_iam.py +++ b/tests/external_botocore/test_boto3_iam.py @@ -27,8 +27,9 @@ ) from newrelic.api.background_task import background_task +from newrelic.common.package_version_utils import get_package_version_tuple -MOTO_VERSION = tuple(int(v) for v in moto.__version__.split(".")[:3]) +MOTO_VERSION = get_package_version_tuple("moto") # patch earlier versions of moto to support py37 if sys.version_info >= (3, 7) and MOTO_VERSION <= (1, 3, 1): diff --git a/tests/external_botocore/test_boto3_s3.py b/tests/external_botocore/test_boto3_s3.py index b6299d9f6e..00972c25b1 100644 --- a/tests/external_botocore/test_boto3_s3.py +++ b/tests/external_botocore/test_boto3_s3.py @@ -25,8 +25,9 @@ ) from newrelic.api.background_task import background_task +from newrelic.common.package_version_utils import 
get_package_version_tuple -MOTO_VERSION = tuple(int(v) for v in moto.__version__.split(".")[:3]) +MOTO_VERSION = get_package_version_tuple("moto") # patch earlier versions of moto to support py37 if sys.version_info >= (3, 7) and MOTO_VERSION <= (1, 3, 1): diff --git a/tests/external_botocore/test_boto3_sns.py b/tests/external_botocore/test_boto3_sns.py index 5e6c7c4b4e..307aeed84a 100644 --- a/tests/external_botocore/test_boto3_sns.py +++ b/tests/external_botocore/test_boto3_sns.py @@ -27,8 +27,9 @@ ) from newrelic.api.background_task import background_task +from newrelic.common.package_version_utils import get_package_version_tuple -MOTO_VERSION = tuple(int(v) for v in moto.__version__.split(".")[:3]) +MOTO_VERSION = get_package_version_tuple("moto") # patch earlier versions of moto to support py37 if sys.version_info >= (3, 7) and MOTO_VERSION <= (1, 3, 1): diff --git a/tests/external_botocore/test_botocore_dynamodb.py b/tests/external_botocore/test_botocore_dynamodb.py index 932fb1743a..8c43ed5c45 100644 --- a/tests/external_botocore/test_botocore_dynamodb.py +++ b/tests/external_botocore/test_botocore_dynamodb.py @@ -27,8 +27,9 @@ ) from newrelic.api.background_task import background_task +from newrelic.common.package_version_utils import get_package_version_tuple -MOTO_VERSION = tuple(int(v) for v in moto.__version__.split(".")[:3]) +MOTO_VERSION = get_package_version_tuple("moto") # patch earlier versions of moto to support py37 if sys.version_info >= (3, 7) and MOTO_VERSION <= (1, 3, 1): diff --git a/tests/external_botocore/test_botocore_ec2.py b/tests/external_botocore/test_botocore_ec2.py index 3cb83e3185..e43744f6c8 100644 --- a/tests/external_botocore/test_botocore_ec2.py +++ b/tests/external_botocore/test_botocore_ec2.py @@ -27,8 +27,9 @@ ) from newrelic.api.background_task import background_task +from newrelic.common.package_version_utils import get_package_version_tuple -MOTO_VERSION = tuple(int(v) for v in moto.__version__.split(".")[:3]) 
+MOTO_VERSION = get_package_version_tuple("moto") # patch earlier versions of moto to support py37 if sys.version_info >= (3, 7) and MOTO_VERSION <= (1, 3, 1): diff --git a/tests/external_botocore/test_botocore_s3.py b/tests/external_botocore/test_botocore_s3.py index ea0c225390..5bd2feab1c 100644 --- a/tests/external_botocore/test_botocore_s3.py +++ b/tests/external_botocore/test_botocore_s3.py @@ -25,9 +25,10 @@ ) from newrelic.api.background_task import background_task +from newrelic.common.package_version_utils import get_package_version_tuple -MOTO_VERSION = tuple(int(v) for v in moto.__version__.split(".")[:3]) -BOTOCORE_VERSION = tuple(int(v) for v in botocore.__version__.split(".")[:3]) +MOTO_VERSION = get_package_version_tuple("moto") +BOTOCORE_VERSION = get_package_version_tuple("botocore") # patch earlier versions of moto to support py37 diff --git a/tests/external_botocore/test_botocore_sqs.py b/tests/external_botocore/test_botocore_sqs.py index 63f15801b5..6a96614e54 100644 --- a/tests/external_botocore/test_botocore_sqs.py +++ b/tests/external_botocore/test_botocore_sqs.py @@ -25,9 +25,10 @@ ) from newrelic.api.background_task import background_task -from newrelic.common.package_version_utils import get_package_version +from newrelic.common.package_version_utils import get_package_version_tuple -MOTO_VERSION = tuple(int(v) for v in moto.__version__.split(".")[:3]) +MOTO_VERSION = get_package_version_tuple("moto") +BOTOCORE_VERSION = get_package_version_tuple("botocore") # patch earlier versions of moto to support py37 if sys.version_info >= (3, 7) and MOTO_VERSION <= (1, 3, 1): @@ -36,8 +37,8 @@ moto.packages.responses.responses.re._pattern_type = re.Pattern url = "sqs.us-east-1.amazonaws.com" -botocore_version = tuple([int(n) for n in get_package_version("botocore").split(".")]) -if botocore_version < (1, 29, 0): + +if BOTOCORE_VERSION < (1, 29, 0): url = "queue.amazonaws.com" AWS_ACCESS_KEY_ID = "AAAAAAAAAAAACCESSKEY" diff --git
a/tests/external_requests/test_requests.py b/tests/external_requests/test_requests.py index f6f4506e51..d25d203c08 100644 --- a/tests/external_requests/test_requests.py +++ b/tests/external_requests/test_requests.py @@ -30,13 +30,19 @@ from testing_support.validators.validate_external_node_params import ( validate_external_node_params, ) -from testing_support.validators.validate_transaction_errors import validate_transaction_errors -from testing_support.validators.validate_transaction_metrics import validate_transaction_metrics +from testing_support.validators.validate_transaction_errors import ( + validate_transaction_errors, +) +from testing_support.validators.validate_transaction_metrics import ( + validate_transaction_metrics, +) + from newrelic.api.background_task import background_task +from newrelic.common.package_version_utils import get_package_version_tuple def get_requests_version(): - return tuple(map(int, requests.__version__.split(".")[:2])) + return get_package_version_tuple("requests") @pytest.fixture(scope="session") @@ -89,7 +95,7 @@ def test_https_request_get(server, metrics): @background_task(name="test_requests:test_https_request_get") def _test(): try: - requests.get("https://localhost:%d/" % server.port, verify=False) + requests.get("https://localhost:%d/" % server.port, verify=False) # nosec except Exception: pass diff --git a/tests/external_urllib3/test_urllib3.py b/tests/external_urllib3/test_urllib3.py index 68e15d4634..92a2e93df0 100644 --- a/tests/external_urllib3/test_urllib3.py +++ b/tests/external_urllib3/test_urllib3.py @@ -25,20 +25,22 @@ cache_outgoing_headers, insert_incoming_headers, ) -from testing_support.fixtures import ( - cat_enabled, - override_application_settings, -) -from testing_support.util import version2tuple +from testing_support.fixtures import cat_enabled, override_application_settings from testing_support.validators.validate_cross_process_headers import ( validate_cross_process_headers, ) from 
testing_support.validators.validate_external_node_params import ( validate_external_node_params, ) -from testing_support.validators.validate_transaction_errors import validate_transaction_errors -from testing_support.validators.validate_transaction_metrics import validate_transaction_metrics +from testing_support.validators.validate_transaction_errors import ( + validate_transaction_errors, +) +from testing_support.validators.validate_transaction_metrics import ( + validate_transaction_metrics, +) + from newrelic.api.background_task import background_task +from newrelic.common.package_version_utils import get_package_version_tuple @pytest.fixture(scope="session") @@ -185,7 +187,7 @@ def _test(): # HTTPConnection class. Previously the httplib/http.client HTTPConnection class # was used. We test httplib in a different test directory so we skip this test. @pytest.mark.skipif( - version2tuple(urllib3.__version__) < (1, 8), reason="urllib3.connection.HTTPConnection added in 1.8" + get_package_version_tuple("urllib3") < (1, 8), reason="urllib3.connection.HTTPConnection added in 1.8" ) def test_HTTPConnection_port_included(server): scoped = [("External/localhost:%d/urllib3/" % server.port, 1)] diff --git a/tests/framework_bottle/test_application.py b/tests/framework_bottle/test_application.py index 28619d5eb5..e4e313880e 100644 --- a/tests/framework_bottle/test_application.py +++ b/tests/framework_bottle/test_application.py @@ -12,218 +12,234 @@ # See the License for the specific language governing permissions and # limitations under the License. 
-import pytest import base64 +import pytest from testing_support.fixtures import ( + override_application_settings, override_ignore_status_codes, - override_application_settings) -from testing_support.validators.validate_transaction_metrics import validate_transaction_metrics +) +from testing_support.validators.validate_code_level_metrics import ( + validate_code_level_metrics, +) +from testing_support.validators.validate_transaction_errors import ( + validate_transaction_errors, +) +from testing_support.validators.validate_transaction_metrics import ( + validate_transaction_metrics, +) + +from newrelic.common.package_version_utils import get_package_version_tuple from newrelic.packages import six -from testing_support.validators.validate_code_level_metrics import validate_code_level_metrics -from testing_support.validators.validate_transaction_errors import validate_transaction_errors - -import webtest -from bottle import __version__ as version - -version = [int(x) for x in version.split('-')[0].split('.')] +version = list(get_package_version_tuple("bottle")) if len(version) == 2: version.append(0) version = tuple(version) +assert version > (0, 1), "version information not found" -requires_auth_basic = pytest.mark.skipif(version < (0, 9, 0), - reason="Bottle only added auth_basic in 0.9.0.") -requires_plugins = pytest.mark.skipif(version < (0, 9, 0), - reason="Bottle only added auth_basic in 0.9.0.") +requires_auth_basic = pytest.mark.skipif(version < (0, 9, 0), reason="Bottle only added auth_basic in 0.9.0.") +requires_plugins = pytest.mark.skipif(version < (0, 9, 0), reason="Bottle only added auth_basic in 0.9.0.") _test_application_index_scoped_metrics = [ - ('Python/WSGI/Application', 1), - ('Python/WSGI/Response', 1), - ('Python/WSGI/Finalize', 1), - ('Function/_target_application:index_page', 1)] + ("Python/WSGI/Application", 1), + ("Python/WSGI/Response", 1), + ("Python/WSGI/Finalize", 1), + ("Function/_target_application:index_page", 1), +] if version >= 
(0, 9, 0): - _test_application_index_scoped_metrics.extend([ - ('Function/bottle:Bottle.wsgi', 1)]) + _test_application_index_scoped_metrics.extend([("Function/bottle:Bottle.wsgi", 1)]) else: - _test_application_index_scoped_metrics.extend([ - ('Function/bottle:Bottle.__call__', 1)]) + _test_application_index_scoped_metrics.extend([("Function/bottle:Bottle.__call__", 1)]) + +_test_application_index_custom_metrics = [("Python/Framework/Bottle/%s.%s.%s" % version, 1)] -_test_application_index_custom_metrics = [ - ('Python/Framework/Bottle/%s.%s.%s' % version, 1)] @validate_code_level_metrics("_target_application", "index_page") @validate_transaction_errors(errors=[]) -@validate_transaction_metrics('_target_application:index_page', - scoped_metrics=_test_application_index_scoped_metrics, - custom_metrics=_test_application_index_custom_metrics) +@validate_transaction_metrics( + "_target_application:index_page", + scoped_metrics=_test_application_index_scoped_metrics, + custom_metrics=_test_application_index_custom_metrics, +) def test_application_index(target_application): - response = target_application.get('/index') - response.mustcontain('INDEX RESPONSE') + response = target_application.get("/index") + response.mustcontain("INDEX RESPONSE") + _test_application_error_scoped_metrics = [ - ('Python/WSGI/Application', 1), - ('Python/WSGI/Response', 1), - ('Python/WSGI/Finalize', 1), - ('Function/_target_application:error_page', 1)] + ("Python/WSGI/Application", 1), + ("Python/WSGI/Response", 1), + ("Python/WSGI/Finalize", 1), + ("Function/_target_application:error_page", 1), +] if version >= (0, 9, 0): - _test_application_error_scoped_metrics.extend([ - ('Function/bottle:Bottle.wsgi', 1)]) + _test_application_error_scoped_metrics.extend([("Function/bottle:Bottle.wsgi", 1)]) else: - _test_application_error_scoped_metrics.extend([ - ('Function/bottle:Bottle.__call__', 1)]) + _test_application_error_scoped_metrics.extend([("Function/bottle:Bottle.__call__", 1)]) 
-_test_application_error_custom_metrics = [ - ('Python/Framework/Bottle/%s.%s.%s' % version, 1)] +_test_application_error_custom_metrics = [("Python/Framework/Bottle/%s.%s.%s" % version, 1)] if six.PY3: - _test_application_error_errors = ['builtins:RuntimeError'] + _test_application_error_errors = ["builtins:RuntimeError"] else: - _test_application_error_errors = ['exceptions:RuntimeError'] + _test_application_error_errors = ["exceptions:RuntimeError"] + @validate_code_level_metrics("_target_application", "error_page") @validate_transaction_errors(errors=_test_application_error_errors) -@validate_transaction_metrics('_target_application:error_page', - scoped_metrics=_test_application_error_scoped_metrics, - custom_metrics=_test_application_error_custom_metrics) +@validate_transaction_metrics( + "_target_application:error_page", + scoped_metrics=_test_application_error_scoped_metrics, + custom_metrics=_test_application_error_custom_metrics, +) def test_application_error(target_application): - response = target_application.get('/error', status=500, expect_errors=True) + response = target_application.get("/error", status=500, expect_errors=True) + _test_application_not_found_scoped_metrics = [ - ('Python/WSGI/Application', 1), - ('Python/WSGI/Response', 1), - ('Python/WSGI/Finalize', 1), - ('Function/_target_application:error404_page', 1)] + ("Python/WSGI/Application", 1), + ("Python/WSGI/Response", 1), + ("Python/WSGI/Finalize", 1), + ("Function/_target_application:error404_page", 1), +] if version >= (0, 9, 0): - _test_application_not_found_scoped_metrics.extend([ - ('Function/bottle:Bottle.wsgi', 1)]) + _test_application_not_found_scoped_metrics.extend([("Function/bottle:Bottle.wsgi", 1)]) else: - _test_application_not_found_scoped_metrics.extend([ - ('Function/bottle:Bottle.__call__', 1)]) + _test_application_not_found_scoped_metrics.extend([("Function/bottle:Bottle.__call__", 1)]) + +_test_application_not_found_custom_metrics = 
[("Python/Framework/Bottle/%s.%s.%s" % version, 1)] -_test_application_not_found_custom_metrics = [ - ('Python/Framework/Bottle/%s.%s.%s' % version, 1)] @validate_code_level_metrics("_target_application", "error404_page") @validate_transaction_errors(errors=[]) -@validate_transaction_metrics('_target_application:error404_page', - scoped_metrics=_test_application_not_found_scoped_metrics, - custom_metrics=_test_application_not_found_custom_metrics) +@validate_transaction_metrics( + "_target_application:error404_page", + scoped_metrics=_test_application_not_found_scoped_metrics, + custom_metrics=_test_application_not_found_custom_metrics, +) def test_application_not_found(target_application): - response = target_application.get('/missing', status=404) - response.mustcontain('NOT FOUND') + response = target_application.get("/missing", status=404) + response.mustcontain("NOT FOUND") + _test_application_auth_basic_fail_scoped_metrics = [ - ('Python/WSGI/Application', 1), - ('Python/WSGI/Response', 1), - ('Python/WSGI/Finalize', 1), - ('Function/_target_application:auth_basic_page', 1)] + ("Python/WSGI/Application", 1), + ("Python/WSGI/Response", 1), + ("Python/WSGI/Finalize", 1), + ("Function/_target_application:auth_basic_page", 1), +] if version >= (0, 9, 0): - _test_application_auth_basic_fail_scoped_metrics.extend([ - ('Function/bottle:Bottle.wsgi', 1)]) + _test_application_auth_basic_fail_scoped_metrics.extend([("Function/bottle:Bottle.wsgi", 1)]) else: - _test_application_auth_basic_fail_scoped_metrics.extend([ - ('Function/bottle:Bottle.__call__', 1)]) + _test_application_auth_basic_fail_scoped_metrics.extend([("Function/bottle:Bottle.__call__", 1)]) + +_test_application_auth_basic_fail_custom_metrics = [("Python/Framework/Bottle/%s.%s.%s" % version, 1)] -_test_application_auth_basic_fail_custom_metrics = [ - ('Python/Framework/Bottle/%s.%s.%s' % version, 1)] @requires_auth_basic @validate_code_level_metrics("_target_application", "auth_basic_page") 
@validate_transaction_errors(errors=[]) -@validate_transaction_metrics('_target_application:auth_basic_page', - scoped_metrics=_test_application_auth_basic_fail_scoped_metrics, - custom_metrics=_test_application_auth_basic_fail_custom_metrics) +@validate_transaction_metrics( + "_target_application:auth_basic_page", + scoped_metrics=_test_application_auth_basic_fail_scoped_metrics, + custom_metrics=_test_application_auth_basic_fail_custom_metrics, +) def test_application_auth_basic_fail(target_application): - response = target_application.get('/auth', status=401) + response = target_application.get("/auth", status=401) + _test_application_auth_basic_okay_scoped_metrics = [ - ('Python/WSGI/Application', 1), - ('Python/WSGI/Response', 1), - ('Python/WSGI/Finalize', 1), - ('Function/_target_application:auth_basic_page', 1)] + ("Python/WSGI/Application", 1), + ("Python/WSGI/Response", 1), + ("Python/WSGI/Finalize", 1), + ("Function/_target_application:auth_basic_page", 1), +] if version >= (0, 9, 0): - _test_application_auth_basic_okay_scoped_metrics.extend([ - ('Function/bottle:Bottle.wsgi', 1)]) + _test_application_auth_basic_okay_scoped_metrics.extend([("Function/bottle:Bottle.wsgi", 1)]) else: - _test_application_auth_basic_okay_scoped_metrics.extend([ - ('Function/bottle:Bottle.__call__', 1)]) + _test_application_auth_basic_okay_scoped_metrics.extend([("Function/bottle:Bottle.__call__", 1)]) + +_test_application_auth_basic_okay_custom_metrics = [("Python/Framework/Bottle/%s.%s.%s" % version, 1)] -_test_application_auth_basic_okay_custom_metrics = [ - ('Python/Framework/Bottle/%s.%s.%s' % version, 1)] @requires_auth_basic @validate_code_level_metrics("_target_application", "auth_basic_page") @validate_transaction_errors(errors=[]) -@validate_transaction_metrics('_target_application:auth_basic_page', - scoped_metrics=_test_application_auth_basic_okay_scoped_metrics, - custom_metrics=_test_application_auth_basic_okay_custom_metrics) +@validate_transaction_metrics( + 
"_target_application:auth_basic_page", + scoped_metrics=_test_application_auth_basic_okay_scoped_metrics, + custom_metrics=_test_application_auth_basic_okay_custom_metrics, +) def test_application_auth_basic_okay(target_application): - authorization_value = base64.b64encode(b'user:password') + authorization_value = base64.b64encode(b"user:password") if six.PY3: - authorization_value = authorization_value.decode('Latin-1') - environ = { 'HTTP_AUTHORIZATION': 'Basic ' + authorization_value } - response = target_application.get('/auth', extra_environ=environ) - response.mustcontain('AUTH OKAY') + authorization_value = authorization_value.decode("Latin-1") + environ = {"HTTP_AUTHORIZATION": "Basic " + authorization_value} + response = target_application.get("/auth", extra_environ=environ) + response.mustcontain("AUTH OKAY") + _test_application_plugin_error_scoped_metrics = [ - ('Python/WSGI/Application', 1), - ('Python/WSGI/Response', 1), - ('Python/WSGI/Finalize', 1), - ('Function/_target_application:plugin_error_page', 1)] + ("Python/WSGI/Application", 1), + ("Python/WSGI/Response", 1), + ("Python/WSGI/Finalize", 1), + ("Function/_target_application:plugin_error_page", 1), +] if version >= (0, 9, 0): - _test_application_plugin_error_scoped_metrics.extend([ - ('Function/bottle:Bottle.wsgi', 1)]) + _test_application_plugin_error_scoped_metrics.extend([("Function/bottle:Bottle.wsgi", 1)]) else: - _test_application_plugin_error_scoped_metrics.extend([ - ('Function/bottle:Bottle.__call__', 1)]) + _test_application_plugin_error_scoped_metrics.extend([("Function/bottle:Bottle.__call__", 1)]) + +_test_application_plugin_error_custom_metrics = [("Python/Framework/Bottle/%s.%s.%s" % version, 1)] -_test_application_plugin_error_custom_metrics = [ - ('Python/Framework/Bottle/%s.%s.%s' % version, 1)] @requires_plugins @validate_code_level_metrics("_target_application", "plugin_error_page") @validate_transaction_errors(errors=[]) 
-@validate_transaction_metrics('_target_application:plugin_error_page', - scoped_metrics=_test_application_plugin_error_scoped_metrics, - custom_metrics=_test_application_plugin_error_custom_metrics) +@validate_transaction_metrics( + "_target_application:plugin_error_page", + scoped_metrics=_test_application_plugin_error_scoped_metrics, + custom_metrics=_test_application_plugin_error_custom_metrics, +) @override_ignore_status_codes([403]) def test_application_plugin_error_ignore(target_application): - response = target_application.get('/plugin_error', status=403, - expect_errors=True) + response = target_application.get("/plugin_error", status=403, expect_errors=True) + @requires_plugins @validate_code_level_metrics("_target_application", "plugin_error_page") -@validate_transaction_errors(errors=['bottle:HTTPError']) -@validate_transaction_metrics('_target_application:plugin_error_page', - scoped_metrics=_test_application_plugin_error_scoped_metrics, - custom_metrics=_test_application_plugin_error_custom_metrics) +@validate_transaction_errors(errors=["bottle:HTTPError"]) +@validate_transaction_metrics( + "_target_application:plugin_error_page", + scoped_metrics=_test_application_plugin_error_scoped_metrics, + custom_metrics=_test_application_plugin_error_custom_metrics, +) def test_application_plugin_error_capture(target_application): - import newrelic.agent - response = target_application.get('/plugin_error', status=403, - expect_errors=True) + response = target_application.get("/plugin_error", status=403, expect_errors=True) + _test_html_insertion_settings = { - 'browser_monitoring.enabled': True, - 'browser_monitoring.auto_instrument': True, - 'js_agent_loader': u'<!-- NREUM HEADER -->', + "browser_monitoring.enabled": True, + "browser_monitoring.auto_instrument": True, + "js_agent_loader": "<!-- NREUM HEADER -->", } + @override_application_settings(_test_html_insertion_settings) def test_html_insertion(target_application): - response = target_application.get('/html_insertion') + response =
target_application.get("/html_insertion") # The 'NREUM HEADER' value comes from our override for the header. # The 'NREUM.info' value comes from the programmatically generated - # footer added by the agent. - - response.mustcontain('NREUM HEADER', 'NREUM.info') + # header added by the agent. + response.mustcontain("NREUM HEADER", "NREUM.info") diff --git a/tests/framework_cherrypy/test_application.py b/tests/framework_cherrypy/test_application.py index 39f8b5c16d..dd4595c0b8 100644 --- a/tests/framework_cherrypy/test_application.py +++ b/tests/framework_cherrypy/test_application.py @@ -12,31 +12,33 @@ # See the License for the specific language governing permissions and # limitations under the License. +import cherrypy import pytest import webtest - -from newrelic.packages import six - from testing_support.fixtures import ( - override_application_settings, - override_ignore_status_codes) -from testing_support.validators.validate_code_level_metrics import validate_code_level_metrics -from testing_support.validators.validate_transaction_errors import validate_transaction_errors + override_application_settings, + override_ignore_status_codes, +) +from testing_support.validators.validate_code_level_metrics import ( + validate_code_level_metrics, +) +from testing_support.validators.validate_transaction_errors import ( + validate_transaction_errors, +) -import cherrypy +from newrelic.packages import six -CHERRYPY_VERSION = tuple(int(v) for v in cherrypy.__version__.split('.')) +CHERRYPY_VERSION = tuple(int(v) for v in cherrypy.__version__.split(".")) class Application(object): - @cherrypy.expose def index(self): - return 'INDEX RESPONSE' + return "INDEX RESPONSE" @cherrypy.expose def error(self): - raise RuntimeError('error') + raise RuntimeError("error") @cherrypy.expose def not_found(self): @@ -48,35 +50,37 @@ def not_found_as_http_error(self): @cherrypy.expose def not_found_as_str_http_error(self): - raise cherrypy.HTTPError('404 Not Found') + raise 
cherrypy.HTTPError("404 Not Found") @cherrypy.expose def bad_http_error(self): # this will raise HTTPError with status code 500 because 10 is not a # valid status code - raise cherrypy.HTTPError('10 Invalid status code') + raise cherrypy.HTTPError("10 Invalid status code") @cherrypy.expose def internal_redirect(self): - raise cherrypy.InternalRedirect('/') + raise cherrypy.InternalRedirect("/") @cherrypy.expose def external_redirect(self): - raise cherrypy.HTTPRedirect('/') + raise cherrypy.HTTPRedirect("/") @cherrypy.expose def upload_files(self, files): - return 'UPLOAD FILES RESPONSE' + return "UPLOAD FILES RESPONSE" @cherrypy.expose def encode_multipart(self, field, files): - return 'ENCODE MULTIPART RESPONSE' + return "ENCODE MULTIPART RESPONSE" @cherrypy.expose def html_insertion(self): - return ('<!DOCTYPE html><html><head>Some header</head>' - '<body><h1>My First Heading</h1><p>My first paragraph.</p>' - '</body></html>') + return ( + "<!DOCTYPE html><html><head>Some header</head>" + "<body><h1>My First Heading</h1><p>My first paragraph.</p>" + "</body></html>" + ) application = cherrypy.Application(Application()) @@ -86,99 +90,97 @@ def html_insertion(self): @validate_code_level_metrics("test_application.Application", "index") @validate_transaction_errors(errors=[]) def test_application_index(): - response = test_application.get('') - response.mustcontain('INDEX RESPONSE') + response = test_application.get("") + response.mustcontain("INDEX RESPONSE") @validate_transaction_errors(errors=[]) def test_application_index_agent_disabled(): - environ = {'newrelic.enabled': False} - response = test_application.get('', extra_environ=environ) - response.mustcontain('INDEX RESPONSE') + environ = {"newrelic.enabled": False} + response = test_application.get("", extra_environ=environ) + response.mustcontain("INDEX RESPONSE") @validate_transaction_errors(errors=[]) def test_application_missing(): - test_application.get('/missing', status=404) + test_application.get("/missing", status=404) if six.PY3: - _test_application_unexpected_exception_errors = ['builtins:RuntimeError'] + _test_application_unexpected_exception_errors = ["builtins:RuntimeError"] else: - _test_application_unexpected_exception_errors = ['exceptions:RuntimeError'] + _test_application_unexpected_exception_errors = ["exceptions:RuntimeError"] -@validate_transaction_errors( - errors=_test_application_unexpected_exception_errors) +@validate_transaction_errors(errors=_test_application_unexpected_exception_errors) def test_application_unexpected_exception(): - test_application.get('/error', status=500) + test_application.get("/error", status=500) @validate_transaction_errors(errors=[]) def test_application_not_found(): - test_application.get('/not_found', status=404) + test_application.get("/not_found", status=404) @validate_transaction_errors(errors=[]) def test_application_not_found_as_http_error(): - test_application.get('/not_found_as_http_error', status=404) + test_application.get("/not_found_as_http_error", status=404) @validate_transaction_errors(errors=[])
def test_application_internal_redirect(): - response = test_application.get('/internal_redirect') - response.mustcontain('INDEX RESPONSE') + response = test_application.get("/internal_redirect") + response.mustcontain("INDEX RESPONSE") @validate_transaction_errors(errors=[]) def test_application_external_redirect(): - test_application.get('/external_redirect', status=302) + test_application.get("/external_redirect", status=302) @validate_transaction_errors(errors=[]) def test_application_upload_files(): - test_application.post('/upload_files', upload_files=[('files', __file__)]) + test_application.post("/upload_files", upload_files=[("files", __file__)]) @validate_transaction_errors(errors=[]) def test_application_encode_multipart(): - content_type, body = test_application.encode_multipart( - params=[('field', 'value')], files=[('files', __file__)]) - test_application.request('/encode_multipart', method='POST', - content_type=content_type, body=body) + content_type, body = test_application.encode_multipart(params=[("field", "value")], files=[("files", __file__)]) + test_application.request("/encode_multipart", method="POST", content_type=content_type, body=body) _test_html_insertion_settings = { - 'browser_monitoring.enabled': True, - 'browser_monitoring.auto_instrument': True, - 'js_agent_loader': u'<!-- NREUM HEADER -->', + "browser_monitoring.enabled": True, + "browser_monitoring.auto_instrument": True, + "js_agent_loader": "<!-- NREUM HEADER -->", } @override_application_settings(_test_html_insertion_settings) def test_html_insertion(): - response = test_application.get('/html_insertion') + response = test_application.get("/html_insertion") # The 'NREUM HEADER' value comes from our override for the header. # The 'NREUM.info' value comes from the programmatically generated - # footer added by the agent. + # header added by the agent.
- response.mustcontain('NREUM HEADER', 'NREUM.info') + response.mustcontain("NREUM HEADER", "NREUM.info") -_error_endpoints = ['/not_found_as_http_error'] +_error_endpoints = ["/not_found_as_http_error"] if CHERRYPY_VERSION >= (3, 2): - _error_endpoints.extend(['/not_found_as_str_http_error', - '/bad_http_error']) + _error_endpoints.extend(["/not_found_as_str_http_error", "/bad_http_error"]) -@pytest.mark.parametrize('endpoint', _error_endpoints) -@pytest.mark.parametrize('ignore_overrides,expected_errors', [ - ([], ['cherrypy._cperror:HTTPError']), - ([404, 500], []), -]) +@pytest.mark.parametrize("endpoint", _error_endpoints) +@pytest.mark.parametrize( + "ignore_overrides,expected_errors", + [ + ([], ["cherrypy._cperror:HTTPError"]), + ([404, 500], []), + ], +) def test_ignore_status_code(endpoint, ignore_overrides, expected_errors): - @validate_transaction_errors(errors=expected_errors) @override_ignore_status_codes(ignore_overrides) def _test(): @@ -189,5 +191,5 @@ def _test(): @validate_transaction_errors(errors=[]) def test_ignore_status_unexpected_param(): - response = test_application.get('/?arg=1', status=404) - response.mustcontain(no=['INDEX RESPONSE']) + response = test_application.get("/?arg=1", status=404) + response.mustcontain(no=["INDEX RESPONSE"]) diff --git a/tests/framework_django/templates/main.html b/tests/framework_django/templates/main.html index bcf5afda39..5de5a534a3 100644 --- a/tests/framework_django/templates/main.html +++ b/tests/framework_django/templates/main.html @@ -26,6 +26,5 @@

<h1>My First Heading</h1> <p>My first paragraph.</p> {% show_results %} - {% newrelic_browser_timing_footer %} </body> </html> diff --git a/tests/framework_django/test_application.py b/tests/framework_django/test_application.py index 1f2616b0fa..82501707b2 100644 --- a/tests/framework_django/test_application.py +++ b/tests/framework_django/test_application.py @@ -12,24 +12,33 @@ # See the License for the specific language governing permissions and # limitations under the License. -from testing_support.fixtures import ( - override_application_settings, - override_generic_settings, override_ignore_status_codes) -from testing_support.validators.validate_code_level_metrics import validate_code_level_metrics -from newrelic.hooks.framework_django import django_settings -from testing_support.validators.validate_transaction_metrics import validate_transaction_metrics -from testing_support.validators.validate_transaction_errors import validate_transaction_errors - import os import django +from testing_support.fixtures import ( + override_application_settings, + override_generic_settings, + override_ignore_status_codes, +) +from testing_support.validators.validate_code_level_metrics import ( + validate_code_level_metrics, +) +from testing_support.validators.validate_transaction_errors import ( + validate_transaction_errors, +) +from testing_support.validators.validate_transaction_metrics import ( + validate_transaction_metrics, +) + +from newrelic.hooks.framework_django import django_settings -DJANGO_VERSION = tuple(map(int, django.get_version().split('.')[:2])) -DJANGO_SETTINGS_MODULE = os.environ.get('DJANGO_SETTINGS_MODULE', None) +DJANGO_VERSION = tuple(map(int, django.get_version().split(".")[:2])) +DJANGO_SETTINGS_MODULE = os.environ.get("DJANGO_SETTINGS_MODULE", None) def target_application(): from _target_application import _target_application + return _target_application @@ -37,272 +46,233 @@ def target_application(): # MIDDLEWARE defined in the version-specific Django settings.py file.
_test_django_pre_1_10_middleware_scoped_metrics = [ - (('Function/django.middleware.common:' - 'CommonMiddleware.process_request'), 1), - (('Function/django.contrib.sessions.middleware:' - 'SessionMiddleware.process_request'), 1), - (('Function/django.contrib.auth.middleware:' - 'AuthenticationMiddleware.process_request'), 1), - (('Function/django.contrib.messages.middleware:' - 'MessageMiddleware.process_request'), 1), - (('Function/django.middleware.csrf:' - 'CsrfViewMiddleware.process_view'), 1), - (('Function/django.contrib.messages.middleware:' - 'MessageMiddleware.process_response'), 1), - (('Function/django.middleware.csrf:' - 'CsrfViewMiddleware.process_response'), 1), - (('Function/django.contrib.sessions.middleware:' - 'SessionMiddleware.process_response'), 1), - (('Function/django.middleware.common:' - 'CommonMiddleware.process_response'), 1), - (('Function/django.middleware.gzip:' - 'GZipMiddleware.process_response'), 1), - (('Function/newrelic.hooks.framework_django:' - 'browser_timing_insertion'), 1), + (("Function/django.middleware.common:" "CommonMiddleware.process_request"), 1), + (("Function/django.contrib.sessions.middleware:" "SessionMiddleware.process_request"), 1), + (("Function/django.contrib.auth.middleware:" "AuthenticationMiddleware.process_request"), 1), + (("Function/django.contrib.messages.middleware:" "MessageMiddleware.process_request"), 1), + (("Function/django.middleware.csrf:" "CsrfViewMiddleware.process_view"), 1), + (("Function/django.contrib.messages.middleware:" "MessageMiddleware.process_response"), 1), + (("Function/django.middleware.csrf:" "CsrfViewMiddleware.process_response"), 1), + (("Function/django.contrib.sessions.middleware:" "SessionMiddleware.process_response"), 1), + (("Function/django.middleware.common:" "CommonMiddleware.process_response"), 1), + (("Function/django.middleware.gzip:" "GZipMiddleware.process_response"), 1), + (("Function/newrelic.hooks.framework_django:" "browser_timing_insertion"), 1), ] 
_test_django_post_1_10_middleware_scoped_metrics = [ - ('Function/django.middleware.security:SecurityMiddleware', 1), - ('Function/django.contrib.sessions.middleware:SessionMiddleware', 1), - ('Function/django.middleware.common:CommonMiddleware', 1), - ('Function/django.middleware.csrf:CsrfViewMiddleware', 1), - ('Function/django.contrib.auth.middleware:AuthenticationMiddleware', 1), - ('Function/django.contrib.messages.middleware:MessageMiddleware', 1), - ('Function/django.middleware.clickjacking:XFrameOptionsMiddleware', 1), - ('Function/django.middleware.gzip:GZipMiddleware', 1), + ("Function/django.middleware.security:SecurityMiddleware", 1), + ("Function/django.contrib.sessions.middleware:SessionMiddleware", 1), + ("Function/django.middleware.common:CommonMiddleware", 1), + ("Function/django.middleware.csrf:CsrfViewMiddleware", 1), + ("Function/django.contrib.auth.middleware:AuthenticationMiddleware", 1), + ("Function/django.contrib.messages.middleware:MessageMiddleware", 1), + ("Function/django.middleware.clickjacking:XFrameOptionsMiddleware", 1), + ("Function/django.middleware.gzip:GZipMiddleware", 1), ] _test_django_pre_1_10_url_resolver_scoped_metrics = [ - ('Function/django.core.urlresolvers:RegexURLResolver.resolve', 'present'), + ("Function/django.core.urlresolvers:RegexURLResolver.resolve", "present"), ] _test_django_post_1_10_url_resolver_scoped_metrics = [ - ('Function/django.urls.resolvers:RegexURLResolver.resolve', 'present'), + ("Function/django.urls.resolvers:RegexURLResolver.resolve", "present"), ] _test_django_post_2_0_url_resolver_scoped_metrics = [ - ('Function/django.urls.resolvers:URLResolver.resolve', 'present'), + ("Function/django.urls.resolvers:URLResolver.resolve", "present"), ] _test_application_index_scoped_metrics = [ - ('Function/django.core.handlers.wsgi:WSGIHandler.__call__', 1), - ('Python/WSGI/Application', 1), - ('Python/WSGI/Response', 1), - ('Python/WSGI/Finalize', 1), - ('Function/views:index', 1), + 
("Function/django.core.handlers.wsgi:WSGIHandler.__call__", 1), + ("Python/WSGI/Application", 1), + ("Python/WSGI/Response", 1), + ("Python/WSGI/Finalize", 1), + ("Function/views:index", 1), ] if DJANGO_VERSION >= (1, 5): - _test_application_index_scoped_metrics.extend([ - ('Function/django.http.response:HttpResponse.close', 1)]) + _test_application_index_scoped_metrics.extend([("Function/django.http.response:HttpResponse.close", 1)]) if DJANGO_VERSION < (1, 10): - _test_application_index_scoped_metrics.extend( - _test_django_pre_1_10_url_resolver_scoped_metrics) + _test_application_index_scoped_metrics.extend(_test_django_pre_1_10_url_resolver_scoped_metrics) elif DJANGO_VERSION >= (2, 0): - _test_application_index_scoped_metrics.extend( - _test_django_post_2_0_url_resolver_scoped_metrics) + _test_application_index_scoped_metrics.extend(_test_django_post_2_0_url_resolver_scoped_metrics) else: - _test_application_index_scoped_metrics.extend( - _test_django_post_1_10_url_resolver_scoped_metrics) - -if DJANGO_SETTINGS_MODULE == 'settings_0110_old': - _test_application_index_scoped_metrics.extend( - _test_django_pre_1_10_middleware_scoped_metrics) -elif DJANGO_SETTINGS_MODULE == 'settings_0110_new': - _test_application_index_scoped_metrics.extend( - _test_django_post_1_10_middleware_scoped_metrics) + _test_application_index_scoped_metrics.extend(_test_django_post_1_10_url_resolver_scoped_metrics) + +if DJANGO_SETTINGS_MODULE == "settings_0110_old": + _test_application_index_scoped_metrics.extend(_test_django_pre_1_10_middleware_scoped_metrics) +elif DJANGO_SETTINGS_MODULE == "settings_0110_new": + _test_application_index_scoped_metrics.extend(_test_django_post_1_10_middleware_scoped_metrics) elif DJANGO_VERSION < (1, 10): - _test_application_index_scoped_metrics.extend( - _test_django_pre_1_10_middleware_scoped_metrics) + _test_application_index_scoped_metrics.extend(_test_django_pre_1_10_middleware_scoped_metrics) @validate_transaction_errors(errors=[]) 
-@validate_transaction_metrics('views:index',
-        scoped_metrics=_test_application_index_scoped_metrics)
+@validate_transaction_metrics("views:index", scoped_metrics=_test_application_index_scoped_metrics)
 @validate_code_level_metrics("views", "index")
 def test_application_index():
     test_application = target_application()
-    response = test_application.get('')
-    response.mustcontain('INDEX RESPONSE')
+    response = test_application.get("")
+    response.mustcontain("INDEX RESPONSE")
 
 
-@validate_transaction_metrics('views:exception')
+@validate_transaction_metrics("views:exception")
 @validate_code_level_metrics("views", "exception")
 def test_application_exception():
     test_application = target_application()
-    test_application.get('/exception', status=500)
+    test_application.get("/exception", status=500)
 
 
 _test_application_not_found_scoped_metrics = [
-    ('Function/django.core.handlers.wsgi:WSGIHandler.__call__', 1),
-    ('Python/WSGI/Application', 1),
-    ('Python/WSGI/Response', 1),
-    ('Python/WSGI/Finalize', 1),
+    ("Function/django.core.handlers.wsgi:WSGIHandler.__call__", 1),
+    ("Python/WSGI/Application", 1),
+    ("Python/WSGI/Response", 1),
+    ("Python/WSGI/Finalize", 1),
 ]
 
 if DJANGO_VERSION >= (1, 5):
-    _test_application_not_found_scoped_metrics.extend([
-        ('Function/django.http.response:HttpResponseNotFound.close', 1)])
+    _test_application_not_found_scoped_metrics.extend([("Function/django.http.response:HttpResponseNotFound.close", 1)])
 
 if DJANGO_VERSION < (1, 10):
-    _test_application_not_found_scoped_metrics.extend(
-            _test_django_pre_1_10_url_resolver_scoped_metrics)
+    _test_application_not_found_scoped_metrics.extend(_test_django_pre_1_10_url_resolver_scoped_metrics)
 elif DJANGO_VERSION >= (2, 0):
-    _test_application_not_found_scoped_metrics.extend(
-            _test_django_post_2_0_url_resolver_scoped_metrics)
+    _test_application_not_found_scoped_metrics.extend(_test_django_post_2_0_url_resolver_scoped_metrics)
 else:
-    _test_application_not_found_scoped_metrics.extend(
-            _test_django_post_1_10_url_resolver_scoped_metrics)
+    _test_application_not_found_scoped_metrics.extend(_test_django_post_1_10_url_resolver_scoped_metrics)
 
-if DJANGO_SETTINGS_MODULE == 'settings_0110_old':
-    _test_application_not_found_scoped_metrics.extend(
-            _test_django_pre_1_10_middleware_scoped_metrics)
+if DJANGO_SETTINGS_MODULE == "settings_0110_old":
+    _test_application_not_found_scoped_metrics.extend(_test_django_pre_1_10_middleware_scoped_metrics)
     # The `CsrfViewMiddleware.process_view` isn't called for 404 Not Found.
     _test_application_not_found_scoped_metrics.remove(
-        ('Function/django.middleware.csrf:CsrfViewMiddleware.process_view', 1))
-elif DJANGO_SETTINGS_MODULE == 'settings_0110_new':
-    _test_application_not_found_scoped_metrics.extend(
-            _test_django_post_1_10_middleware_scoped_metrics)
+        ("Function/django.middleware.csrf:CsrfViewMiddleware.process_view", 1)
+    )
+elif DJANGO_SETTINGS_MODULE == "settings_0110_new":
+    _test_application_not_found_scoped_metrics.extend(_test_django_post_1_10_middleware_scoped_metrics)
 elif DJANGO_VERSION < (1, 10):
-    _test_application_not_found_scoped_metrics.extend(
-            _test_django_pre_1_10_middleware_scoped_metrics)
+    _test_application_not_found_scoped_metrics.extend(_test_django_pre_1_10_middleware_scoped_metrics)
     # The `CsrfViewMiddleware.process_view` isn't called for 404 Not Found.
     _test_application_not_found_scoped_metrics.remove(
-        ('Function/django.middleware.csrf:CsrfViewMiddleware.process_view', 1))
+        ("Function/django.middleware.csrf:CsrfViewMiddleware.process_view", 1)
+    )
 
 
 @validate_transaction_errors(errors=[])
-@validate_transaction_metrics('django.views.debug:technical_404_response',
-        scoped_metrics=_test_application_not_found_scoped_metrics)
+@validate_transaction_metrics(
+    "django.views.debug:technical_404_response", scoped_metrics=_test_application_not_found_scoped_metrics
+)
 def test_application_not_found():
     test_application = target_application()
-    test_application.get('/not_found', status=404)
+    test_application.get("/not_found", status=404)
 
 
 @override_ignore_status_codes([403])
 @validate_transaction_errors(errors=[])
-@validate_transaction_metrics('views:permission_denied')
+@validate_transaction_metrics("views:permission_denied")
 @validate_code_level_metrics("views", "permission_denied")
 def test_ignored_status_code():
     test_application = target_application()
-    test_application.get('/permission_denied', status=403)
+    test_application.get("/permission_denied", status=403)
 
 
 @override_ignore_status_codes([410])
 @validate_transaction_errors(errors=[])
-@validate_transaction_metrics('views:middleware_410')
+@validate_transaction_metrics("views:middleware_410")
 @validate_code_level_metrics("views", "middleware_410")
 def test_middleware_ignore_status_codes():
     test_application = target_application()
-    test_application.get('/middleware_410', status=410)
+    test_application.get("/middleware_410", status=410)
 
 
 _test_application_cbv_scoped_metrics = [
-    ('Function/django.core.handlers.wsgi:WSGIHandler.__call__', 1),
-    ('Python/WSGI/Application', 1),
-    ('Python/WSGI/Response', 1),
-    ('Python/WSGI/Finalize', 1),
-    ('Function/views:MyView', 1),
-    ('Function/views:MyView.get', 1),
+    ("Function/django.core.handlers.wsgi:WSGIHandler.__call__", 1),
+    ("Python/WSGI/Application", 1),
+    ("Python/WSGI/Response", 1),
+    ("Python/WSGI/Finalize", 1),
+    ("Function/views:MyView", 1),
+    ("Function/views:MyView.get", 1),
 ]
 
 if DJANGO_VERSION >= (1, 5):
-    _test_application_cbv_scoped_metrics.extend([
-        ('Function/django.http.response:HttpResponse.close', 1)])
+    _test_application_cbv_scoped_metrics.extend([("Function/django.http.response:HttpResponse.close", 1)])
 
 if DJANGO_VERSION < (1, 10):
-    _test_application_cbv_scoped_metrics.extend(
-            _test_django_pre_1_10_url_resolver_scoped_metrics)
+    _test_application_cbv_scoped_metrics.extend(_test_django_pre_1_10_url_resolver_scoped_metrics)
 elif DJANGO_VERSION >= (2, 0):
-    _test_application_cbv_scoped_metrics.extend(
-            _test_django_post_2_0_url_resolver_scoped_metrics)
+    _test_application_cbv_scoped_metrics.extend(_test_django_post_2_0_url_resolver_scoped_metrics)
 else:
-    _test_application_cbv_scoped_metrics.extend(
-            _test_django_post_1_10_url_resolver_scoped_metrics)
-
-if DJANGO_SETTINGS_MODULE == 'settings_0110_old':
-    _test_application_cbv_scoped_metrics.extend(
-            _test_django_pre_1_10_middleware_scoped_metrics)
-elif DJANGO_SETTINGS_MODULE == 'settings_0110_new':
-    _test_application_cbv_scoped_metrics.extend(
-            _test_django_post_1_10_middleware_scoped_metrics)
+    _test_application_cbv_scoped_metrics.extend(_test_django_post_1_10_url_resolver_scoped_metrics)
+
+if DJANGO_SETTINGS_MODULE == "settings_0110_old":
+    _test_application_cbv_scoped_metrics.extend(_test_django_pre_1_10_middleware_scoped_metrics)
+elif DJANGO_SETTINGS_MODULE == "settings_0110_new":
+    _test_application_cbv_scoped_metrics.extend(_test_django_post_1_10_middleware_scoped_metrics)
 elif DJANGO_VERSION < (1, 10):
-    _test_application_cbv_scoped_metrics.extend(
-            _test_django_pre_1_10_middleware_scoped_metrics)
+    _test_application_cbv_scoped_metrics.extend(_test_django_pre_1_10_middleware_scoped_metrics)
 
 
 @validate_transaction_errors(errors=[])
-@validate_transaction_metrics('views:MyView.get',
-        scoped_metrics=_test_application_cbv_scoped_metrics)
+@validate_transaction_metrics("views:MyView.get", scoped_metrics=_test_application_cbv_scoped_metrics)
 @validate_code_level_metrics("views.MyView", "get")
 def test_application_cbv():
     test_application = target_application()
-    response = test_application.get('/cbv')
-    response.mustcontain('CBV RESPONSE')
+    response = test_application.get("/cbv")
+    response.mustcontain("CBV RESPONSE")
 
 
 _test_application_deferred_cbv_scoped_metrics = [
-    ('Function/django.core.handlers.wsgi:WSGIHandler.__call__', 1),
-    ('Python/WSGI/Application', 1),
-    ('Python/WSGI/Response', 1),
-    ('Python/WSGI/Finalize', 1),
-    ('Function/views:deferred_cbv', 1),
-    ('Function/views:MyView.get', 1),
+    ("Function/django.core.handlers.wsgi:WSGIHandler.__call__", 1),
+    ("Python/WSGI/Application", 1),
+    ("Python/WSGI/Response", 1),
+    ("Python/WSGI/Finalize", 1),
+    ("Function/views:deferred_cbv", 1),
+    ("Function/views:MyView.get", 1),
 ]
 
 if DJANGO_VERSION >= (1, 5):
-    _test_application_deferred_cbv_scoped_metrics.extend([
-        ('Function/django.http.response:HttpResponse.close', 1)])
+    _test_application_deferred_cbv_scoped_metrics.extend([("Function/django.http.response:HttpResponse.close", 1)])
 
 if DJANGO_VERSION < (1, 10):
-    _test_application_deferred_cbv_scoped_metrics.extend(
-            _test_django_pre_1_10_url_resolver_scoped_metrics)
+    _test_application_deferred_cbv_scoped_metrics.extend(_test_django_pre_1_10_url_resolver_scoped_metrics)
 elif DJANGO_VERSION >= (2, 0):
-    _test_application_deferred_cbv_scoped_metrics.extend(
-            _test_django_post_2_0_url_resolver_scoped_metrics)
+    _test_application_deferred_cbv_scoped_metrics.extend(_test_django_post_2_0_url_resolver_scoped_metrics)
 else:
-    _test_application_deferred_cbv_scoped_metrics.extend(
-            _test_django_post_1_10_url_resolver_scoped_metrics)
-
-if DJANGO_SETTINGS_MODULE == 'settings_0110_old':
-    _test_application_deferred_cbv_scoped_metrics.extend(
-            _test_django_pre_1_10_middleware_scoped_metrics)
-elif DJANGO_SETTINGS_MODULE == 'settings_0110_new':
-    _test_application_deferred_cbv_scoped_metrics.extend(
-            _test_django_post_1_10_middleware_scoped_metrics)
+    _test_application_deferred_cbv_scoped_metrics.extend(_test_django_post_1_10_url_resolver_scoped_metrics)
+
+if DJANGO_SETTINGS_MODULE == "settings_0110_old":
+    _test_application_deferred_cbv_scoped_metrics.extend(_test_django_pre_1_10_middleware_scoped_metrics)
+elif DJANGO_SETTINGS_MODULE == "settings_0110_new":
+    _test_application_deferred_cbv_scoped_metrics.extend(_test_django_post_1_10_middleware_scoped_metrics)
 elif DJANGO_VERSION < (1, 10):
-    _test_application_deferred_cbv_scoped_metrics.extend(
-            _test_django_pre_1_10_middleware_scoped_metrics)
+    _test_application_deferred_cbv_scoped_metrics.extend(_test_django_pre_1_10_middleware_scoped_metrics)
 
 
 @validate_transaction_errors(errors=[])
-@validate_transaction_metrics('views:deferred_cbv',
-        scoped_metrics=_test_application_deferred_cbv_scoped_metrics)
+@validate_transaction_metrics("views:deferred_cbv", scoped_metrics=_test_application_deferred_cbv_scoped_metrics)
 @validate_code_level_metrics("views", "deferred_cbv")
 def test_application_deferred_cbv():
     test_application = target_application()
-    response = test_application.get('/deferred_cbv')
-    response.mustcontain('CBV RESPONSE')
+    response = test_application.get("/deferred_cbv")
+    response.mustcontain("CBV RESPONSE")
 
 
 _test_html_insertion_settings = {
-    'browser_monitoring.enabled': True,
-    'browser_monitoring.auto_instrument': True,
-    'js_agent_loader': u'<!-- NREUM HEADER -->',
+    "browser_monitoring.enabled": True,
+    "browser_monitoring.auto_instrument": True,
+    "js_agent_loader": "<!-- NREUM HEADER -->",
 }
 
 
 @override_application_settings(_test_html_insertion_settings)
 def test_html_insertion_django_middleware():
     test_application = target_application()
-    response = test_application.get('/html_insertion', status=200)
+    response = test_application.get("/html_insertion", status=200)
 
     # The 'NREUM HEADER' value comes from our override for the header.
     # The 'NREUM.info' value comes from the programmatically generated
-    # footer added by the agent.
+    # header added by the agent.
 
-    response.mustcontain('NREUM HEADER', 'NREUM.info')
+    response.mustcontain("NREUM HEADER", "NREUM.info")
 
 
 @override_application_settings(_test_html_insertion_settings)
@@ -311,23 +281,22 @@ def test_html_insertion_django_gzip_middleware_enabled():
 
     # GZipMiddleware only fires if given the following header.
 
-    gzip_header = {'Accept-Encoding': 'gzip'}
-    response = test_application.get('/gzip_html_insertion', status=200,
-            headers=gzip_header)
+    gzip_header = {"Accept-Encoding": "gzip"}
+    response = test_application.get("/gzip_html_insertion", status=200, headers=gzip_header)
 
     # The 'NREUM HEADER' value comes from our override for the header.
     # The 'NREUM.info' value comes from the programmatically generated
-    # footer added by the agent.
+    # header added by the agent.
 
     # The response.text will already be gunzipped
-    response.mustcontain('NREUM HEADER', 'NREUM.info')
+    response.mustcontain("NREUM HEADER", "NREUM.info")
 
 
 _test_html_insertion_settings_disabled = {
-    'browser_monitoring.enabled': False,
-    'browser_monitoring.auto_instrument': False,
-    'js_agent_loader': u'<!-- NREUM HEADER -->',
+    "browser_monitoring.enabled": False,
+    "browser_monitoring.auto_instrument": False,
+    "js_agent_loader": "<!-- NREUM HEADER -->",
 }
 
 
@@ -337,264 +306,238 @@ def test_html_insertion_django_gzip_middleware_disabled():
 
     # GZipMiddleware only fires if given the following header.
 
-    gzip_header = {'Accept-Encoding': 'gzip'}
-    response = test_application.get('/gzip_html_insertion', status=200,
-            headers=gzip_header)
+    gzip_header = {"Accept-Encoding": "gzip"}
+    response = test_application.get("/gzip_html_insertion", status=200, headers=gzip_header)
 
     # The 'NREUM HEADER' value comes from our override for the header.
     # The 'NREUM.info' value comes from the programmatically generated
-    # footer added by the agent.
+    # header added by the agent.
     # The response.text will already be gunzipped
-    response.mustcontain(no=['NREUM HEADER', 'NREUM.info'])
+    response.mustcontain(no=["NREUM HEADER", "NREUM.info"])
 
 
 _test_html_insertion_manual_settings = {
-    'browser_monitoring.enabled': True,
-    'browser_monitoring.auto_instrument': True,
-    'js_agent_loader': u'<!-- NREUM HEADER -->',
+    "browser_monitoring.enabled": True,
+    "browser_monitoring.auto_instrument": True,
+    "js_agent_loader": "<!-- NREUM HEADER -->",
 }
 
 
 @override_application_settings(_test_html_insertion_manual_settings)
 def test_html_insertion_manual_django_middleware():
     test_application = target_application()
-    response = test_application.get('/html_insertion_manual', status=200)
+    response = test_application.get("/html_insertion_manual", status=200)
 
     # The 'NREUM HEADER' value comes from our override for the header.
     # The 'NREUM.info' value comes from the programmatically generated
-    # footer added by the agent.
+    # header added by the agent.
 
-    response.mustcontain(no=['NREUM HEADER', 'NREUM.info'])
+    response.mustcontain(no=["NREUM HEADER", "NREUM.info"])
 
 
 @override_application_settings(_test_html_insertion_settings)
 def test_html_insertion_unnamed_attachment_header_django_middleware():
     test_application = target_application()
-    response = test_application.get(
-        '/html_insertion_unnamed_attachment_header', status=200)
+    response = test_application.get("/html_insertion_unnamed_attachment_header", status=200)
 
     # The 'NREUM HEADER' value comes from our override for the header.
     # The 'NREUM.info' value comes from the programmatically generated
-    # footer added by the agent.
+    # header added by the agent.
-    response.mustcontain(no=['NREUM HEADER', 'NREUM.info'])
+    response.mustcontain(no=["NREUM HEADER", "NREUM.info"])
 
 
 @override_application_settings(_test_html_insertion_settings)
 def test_html_insertion_named_attachment_header_django_middleware():
     test_application = target_application()
-    response = test_application.get(
-        '/html_insertion_named_attachment_header', status=200)
+    response = test_application.get("/html_insertion_named_attachment_header", status=200)
 
     # The 'NREUM HEADER' value comes from our override for the header.
     # The 'NREUM.info' value comes from the programmatically generated
-    # footer added by the agent.
+    # header added by the agent.
 
-    response.mustcontain(no=['NREUM HEADER', 'NREUM.info'])
+    response.mustcontain(no=["NREUM HEADER", "NREUM.info"])
 
 
 _test_html_insertion_settings = {
-    'browser_monitoring.enabled': True,
-    'browser_monitoring.auto_instrument': False,
-    'js_agent_loader': u'<!-- NREUM HEADER -->',
+    "browser_monitoring.enabled": True,
+    "browser_monitoring.auto_instrument": False,
+    "js_agent_loader": "<!-- NREUM HEADER -->",
 }
 
 
 @override_application_settings(_test_html_insertion_settings)
 def test_html_insertion_manual_tag_instrumentation():
     test_application = target_application()
-    response = test_application.get('/template_tags')
+    response = test_application.get("/template_tags")
 
     # Assert that the instrumentation is not inappropriately escaped
-    response.mustcontain('<!-- NREUM HEADER -->',
-        no=['&lt;!-- NREUM HEADER --&gt;'])
+    response.mustcontain("<!-- NREUM HEADER -->", no=["&lt;!-- NREUM HEADER --&gt;"])
 
 
 _test_application_inclusion_tag_scoped_metrics = [
-    ('Function/django.core.handlers.wsgi:WSGIHandler.__call__', 1),
-    ('Python/WSGI/Application', 1),
-    ('Python/WSGI/Response', 1),
-    ('Python/WSGI/Finalize', 1),
-    ('Function/views:inclusion_tag', 1),
-    ('Template/Render/main.html', 1),
+    ("Function/django.core.handlers.wsgi:WSGIHandler.__call__", 1),
+    ("Python/WSGI/Application", 1),
+    ("Python/WSGI/Response", 1),
+    ("Python/WSGI/Finalize", 1),
+    ("Function/views:inclusion_tag", 1),
+    ("Template/Render/main.html", 1),
 ]
 
 if DJANGO_VERSION < (1, 9):
-    _test_application_inclusion_tag_scoped_metrics.extend([
-        ('Template/Include/results.html', 1)])
+    _test_application_inclusion_tag_scoped_metrics.extend([("Template/Include/results.html", 1)])
 
 if DJANGO_VERSION < (1, 10):
-    _test_application_inclusion_tag_scoped_metrics.extend(
-            _test_django_pre_1_10_url_resolver_scoped_metrics)
+    _test_application_inclusion_tag_scoped_metrics.extend(_test_django_pre_1_10_url_resolver_scoped_metrics)
 elif DJANGO_VERSION >= (2, 0):
-    _test_application_inclusion_tag_scoped_metrics.extend(
-            _test_django_post_2_0_url_resolver_scoped_metrics)
+    _test_application_inclusion_tag_scoped_metrics.extend(_test_django_post_2_0_url_resolver_scoped_metrics)
 else:
-    _test_application_inclusion_tag_scoped_metrics.extend(
-            _test_django_post_1_10_url_resolver_scoped_metrics)
-
-if DJANGO_SETTINGS_MODULE == 'settings_0110_old':
-    _test_application_inclusion_tag_scoped_metrics.extend(
-            _test_django_pre_1_10_middleware_scoped_metrics)
-elif DJANGO_SETTINGS_MODULE == 'settings_0110_new':
-    _test_application_inclusion_tag_scoped_metrics.extend(
-            _test_django_post_1_10_middleware_scoped_metrics)
+    _test_application_inclusion_tag_scoped_metrics.extend(_test_django_post_1_10_url_resolver_scoped_metrics)
+
+if DJANGO_SETTINGS_MODULE == "settings_0110_old":
+    _test_application_inclusion_tag_scoped_metrics.extend(_test_django_pre_1_10_middleware_scoped_metrics)
+elif DJANGO_SETTINGS_MODULE == "settings_0110_new":
+    _test_application_inclusion_tag_scoped_metrics.extend(_test_django_post_1_10_middleware_scoped_metrics)
 elif DJANGO_VERSION < (1, 10):
-    _test_application_inclusion_tag_scoped_metrics.extend(
-            _test_django_pre_1_10_middleware_scoped_metrics)
+    _test_application_inclusion_tag_scoped_metrics.extend(_test_django_pre_1_10_middleware_scoped_metrics)
 
 try:
     _test_application_inclusion_tag_scoped_metrics.remove(
-        (('Function/newrelic.hooks.framework_django:'
-            'browser_timing_insertion'), 1)
+        (("Function/newrelic.hooks.framework_django:" "browser_timing_insertion"), 1)
     )
 except ValueError:
     pass
 
 
 @validate_transaction_errors(errors=[])
-@validate_transaction_metrics('views:inclusion_tag',
-        scoped_metrics=_test_application_inclusion_tag_scoped_metrics)
+@validate_transaction_metrics("views:inclusion_tag", scoped_metrics=_test_application_inclusion_tag_scoped_metrics)
 @validate_code_level_metrics("views", "inclusion_tag")
 def test_application_inclusion_tag():
     test_application = target_application()
-    response = test_application.get('/inclusion_tag')
-    response.mustcontain('Inclusion tag')
+    response = test_application.get("/inclusion_tag")
+    response.mustcontain("Inclusion tag")
 
 
 _test_inclusion_tag_template_tags_scoped_metrics = [
-    ('Function/django.core.handlers.wsgi:WSGIHandler.__call__', 1),
-    ('Python/WSGI/Application', 1),
-    ('Python/WSGI/Response', 1),
-    ('Python/WSGI/Finalize', 1),
-    ('Function/views:inclusion_tag', 1),
-    ('Template/Render/main.html', 1),
+    ("Function/django.core.handlers.wsgi:WSGIHandler.__call__", 1),
+    ("Python/WSGI/Application", 1),
+    ("Python/WSGI/Response", 1),
+    ("Python/WSGI/Finalize", 1),
+    ("Function/views:inclusion_tag", 1),
+    ("Template/Render/main.html", 1),
 ]
 
 if DJANGO_VERSION < (1, 9):
-    _test_inclusion_tag_template_tags_scoped_metrics.extend([
-        ('Template/Include/results.html', 1),
-        ('Template/Tag/show_results', 1)])
+    _test_inclusion_tag_template_tags_scoped_metrics.extend(
+        [("Template/Include/results.html", 1), ("Template/Tag/show_results", 1)]
+    )
 
-_test_inclusion_tag_settings = {
-    'instrumentation.templates.inclusion_tag': '*'
-}
+_test_inclusion_tag_settings = {"instrumentation.templates.inclusion_tag": "*"}
 
 if DJANGO_VERSION < (1, 10):
-    _test_inclusion_tag_template_tags_scoped_metrics.extend(
-            _test_django_pre_1_10_url_resolver_scoped_metrics)
+    _test_inclusion_tag_template_tags_scoped_metrics.extend(_test_django_pre_1_10_url_resolver_scoped_metrics)
 elif DJANGO_VERSION >= (2, 0):
-    _test_inclusion_tag_template_tags_scoped_metrics.extend(
-            _test_django_post_2_0_url_resolver_scoped_metrics)
+    _test_inclusion_tag_template_tags_scoped_metrics.extend(_test_django_post_2_0_url_resolver_scoped_metrics)
 else:
-    _test_inclusion_tag_template_tags_scoped_metrics.extend(
-            _test_django_post_1_10_url_resolver_scoped_metrics)
+    _test_inclusion_tag_template_tags_scoped_metrics.extend(_test_django_post_1_10_url_resolver_scoped_metrics)
 
-if DJANGO_SETTINGS_MODULE == 'settings_0110_old':
-    _test_inclusion_tag_template_tags_scoped_metrics.extend(
-            _test_django_pre_1_10_middleware_scoped_metrics)
-elif DJANGO_SETTINGS_MODULE == 'settings_0110_new':
-    _test_inclusion_tag_template_tags_scoped_metrics.extend(
-            _test_django_post_1_10_middleware_scoped_metrics)
+if DJANGO_SETTINGS_MODULE == "settings_0110_old":
+    _test_inclusion_tag_template_tags_scoped_metrics.extend(_test_django_pre_1_10_middleware_scoped_metrics)
+elif DJANGO_SETTINGS_MODULE == "settings_0110_new":
+    _test_inclusion_tag_template_tags_scoped_metrics.extend(_test_django_post_1_10_middleware_scoped_metrics)
 elif DJANGO_VERSION < (1, 10):
-    _test_inclusion_tag_template_tags_scoped_metrics.extend(
-            _test_django_pre_1_10_middleware_scoped_metrics)
+    _test_inclusion_tag_template_tags_scoped_metrics.extend(_test_django_pre_1_10_middleware_scoped_metrics)
 
 try:
     _test_inclusion_tag_template_tags_scoped_metrics.remove(
-        (('Function/newrelic.hooks.framework_django:'
-            'browser_timing_insertion'), 1)
+        (("Function/newrelic.hooks.framework_django:" "browser_timing_insertion"), 1)
     )
 except ValueError:
     pass
 
 
 @validate_transaction_errors(errors=[])
-@validate_transaction_metrics('views:inclusion_tag',
-        scoped_metrics=_test_inclusion_tag_template_tags_scoped_metrics)
+@validate_transaction_metrics("views:inclusion_tag", scoped_metrics=_test_inclusion_tag_template_tags_scoped_metrics)
 @override_generic_settings(django_settings, _test_inclusion_tag_settings)
 @validate_code_level_metrics("views", "inclusion_tag")
 def test_inclusion_tag_template_tag_metric():
     test_application = target_application()
-    response = test_application.get('/inclusion_tag')
-    response.mustcontain('Inclusion tag')
+    response = test_application.get("/inclusion_tag")
+    response.mustcontain("Inclusion tag")
 
 
 _test_template_render_exception_scoped_metrics_base = [
-    ('Function/django.core.handlers.wsgi:WSGIHandler.__call__', 1),
-    ('Python/WSGI/Application', 1),
-    ('Python/WSGI/Response', 1),
-    ('Python/WSGI/Finalize', 1),
+    ("Function/django.core.handlers.wsgi:WSGIHandler.__call__", 1),
+    ("Python/WSGI/Application", 1),
+    ("Python/WSGI/Response", 1),
+    ("Python/WSGI/Finalize", 1),
 ]
 
 if DJANGO_VERSION < (1, 5):
     _test_template_render_exception_scoped_metrics_base.append(
-        ('Function/django.http:HttpResponseServerError.close', 1))
+        ("Function/django.http:HttpResponseServerError.close", 1)
+    )
 elif DJANGO_VERSION < (1, 8):
     _test_template_render_exception_scoped_metrics_base.append(
-        ('Function/django.http.response:HttpResponseServerError.close', 1))
+        ("Function/django.http.response:HttpResponseServerError.close", 1)
+    )
 else:
-    _test_template_render_exception_scoped_metrics_base.append(
-        ('Function/django.http.response:HttpResponse.close', 1))
+    _test_template_render_exception_scoped_metrics_base.append(("Function/django.http.response:HttpResponse.close", 1))
 
 if DJANGO_VERSION < (1, 10):
-    _test_template_render_exception_scoped_metrics_base.extend(
-            _test_django_pre_1_10_url_resolver_scoped_metrics)
+    _test_template_render_exception_scoped_metrics_base.extend(_test_django_pre_1_10_url_resolver_scoped_metrics)
 elif DJANGO_VERSION >= (2, 0):
-    _test_template_render_exception_scoped_metrics_base.extend(
-            _test_django_post_2_0_url_resolver_scoped_metrics)
+    _test_template_render_exception_scoped_metrics_base.extend(_test_django_post_2_0_url_resolver_scoped_metrics)
 else:
-    _test_template_render_exception_scoped_metrics_base.extend(
-            _test_django_post_1_10_url_resolver_scoped_metrics)
-
-if DJANGO_SETTINGS_MODULE == 'settings_0110_old':
-    _test_template_render_exception_scoped_metrics_base.extend(
-            _test_django_pre_1_10_middleware_scoped_metrics)
-elif DJANGO_SETTINGS_MODULE == 'settings_0110_new':
-    _test_template_render_exception_scoped_metrics_base.extend(
-            _test_django_post_1_10_middleware_scoped_metrics)
+    _test_template_render_exception_scoped_metrics_base.extend(_test_django_post_1_10_url_resolver_scoped_metrics)
+
+if DJANGO_SETTINGS_MODULE == "settings_0110_old":
+    _test_template_render_exception_scoped_metrics_base.extend(_test_django_pre_1_10_middleware_scoped_metrics)
+elif DJANGO_SETTINGS_MODULE == "settings_0110_new":
+    _test_template_render_exception_scoped_metrics_base.extend(_test_django_post_1_10_middleware_scoped_metrics)
 elif DJANGO_VERSION < (1, 10):
-    _test_template_render_exception_scoped_metrics_base.extend(
-            _test_django_pre_1_10_middleware_scoped_metrics)
+    _test_template_render_exception_scoped_metrics_base.extend(_test_django_pre_1_10_middleware_scoped_metrics)
 
 if DJANGO_VERSION < (1, 9):
-    _test_template_render_exception_errors = [
-        'django.template.base:TemplateSyntaxError']
+    _test_template_render_exception_errors = ["django.template.base:TemplateSyntaxError"]
 else:
-    _test_template_render_exception_errors = [
-        'django.template.exceptions:TemplateSyntaxError']
+    _test_template_render_exception_errors = ["django.template.exceptions:TemplateSyntaxError"]
 
-_test_template_render_exception_function_scoped_metrics = list(
-    _test_template_render_exception_scoped_metrics_base)
-_test_template_render_exception_function_scoped_metrics.extend([
-    ('Function/views:render_exception_function', 1),
-])
+_test_template_render_exception_function_scoped_metrics = list(_test_template_render_exception_scoped_metrics_base)
+_test_template_render_exception_function_scoped_metrics.extend(
+    [
+        ("Function/views:render_exception_function", 1),
+    ]
+)
 
 
 @validate_transaction_errors(errors=_test_template_render_exception_errors)
-@validate_transaction_metrics('views:render_exception_function',
-        scoped_metrics=_test_template_render_exception_function_scoped_metrics)
+@validate_transaction_metrics(
+    "views:render_exception_function", scoped_metrics=_test_template_render_exception_function_scoped_metrics
+)
 @validate_code_level_metrics("views", "render_exception_function")
 def test_template_render_exception_function():
     test_application = target_application()
-    test_application.get('/render_exception_function', status=500)
+    test_application.get("/render_exception_function", status=500)
 
 
-_test_template_render_exception_class_scoped_metrics = list(
-    _test_template_render_exception_scoped_metrics_base)
-_test_template_render_exception_class_scoped_metrics.extend([
-    ('Function/views:RenderExceptionClass', 1),
-    ('Function/views:RenderExceptionClass.get', 1),
-])
+_test_template_render_exception_class_scoped_metrics = list(_test_template_render_exception_scoped_metrics_base)
+_test_template_render_exception_class_scoped_metrics.extend(
+    [
+        ("Function/views:RenderExceptionClass", 1),
+        ("Function/views:RenderExceptionClass.get", 1),
+    ]
+)
 
 
 @validate_transaction_errors(errors=_test_template_render_exception_errors)
-@validate_transaction_metrics('views:RenderExceptionClass.get',
-        scoped_metrics=_test_template_render_exception_class_scoped_metrics)
+@validate_transaction_metrics(
+    "views:RenderExceptionClass.get", scoped_metrics=_test_template_render_exception_class_scoped_metrics
+)
 @validate_code_level_metrics("views.RenderExceptionClass", "get")
 def test_template_render_exception_class():
     test_application = target_application()
-    test_application.get('/render_exception_class', status=500)
+    test_application.get("/render_exception_class", status=500)
diff --git a/tests/framework_django/views.py b/tests/framework_django/views.py
index c5ce1526c7..e97e273ded 100644
--- a/tests/framework_django/views.py
+++ b/tests/framework_django/views.py
@@ -12,22 +12,21 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
+from django.core.exceptions import PermissionDenied
 from django.http import HttpResponse
-from django.views.generic.base import View, TemplateView
 from django.shortcuts import render
-from django.core.exceptions import PermissionDenied
+from django.views.generic.base import TemplateView, View
 from middleware import Custom410
 
-from newrelic.api.transaction import (get_browser_timing_header,
-        get_browser_timing_footer)
+from newrelic.api.transaction import get_browser_timing_header
 
 
 def index(request):
-    return HttpResponse('INDEX RESPONSE')
+    return HttpResponse("INDEX RESPONSE")
 
 
 def exception(request):
-    raise RuntimeError('exception')
+    raise RuntimeError("exception")
 
 
 def permission_denied(request):
@@ -40,7 +39,7 @@ def middleware_410(request):
 
 class MyView(View):
     def get(self, request):
-        return HttpResponse('CBV RESPONSE')
+        return HttpResponse("CBV RESPONSE")
 
 
 def deferred_cbv(request):
@@ -48,69 +47,77 @@ def deferred_cbv(request):
 
 
 def html_insertion(request):
-    return HttpResponse('<!DOCTYPE html><html><head>Some header</head>'
-            '<body><h1>My First Heading</h1><p>My first paragraph.</p>'
-            '</body></html>')
+    return HttpResponse(
+        "<!DOCTYPE html><html><head>Some header</head>"
+        "<body><h1>My First Heading</h1><p>My first paragraph.</p>"
+        "</body></html>"
+    )
 
 
 def html_insertion_content_length(request):
-    content = ('<!DOCTYPE html><html><head>Some header</head>'
-            '<body><h1>My First Heading</h1><p>My first paragraph.</p>'
-            '</body></html>')
+    content = (
+        "<!DOCTYPE html><html><head>Some header</head>"
+        "<body><h1>My First Heading</h1><p>My first paragraph.</p>"
+        "</body></html>"
+    )
 
     response = HttpResponse(content)
-    response['Content-Length'] = len(content)
+    response["Content-Length"] = len(content)
 
     return response
 
 
 def html_insertion_manual(request):
     header = get_browser_timing_header()
-    footer = get_browser_timing_footer()
-    header = get_browser_timing_header()
-    footer = get_browser_timing_footer()
-    assert header == ''
-    assert footer == ''
+    assert header == ""
 
-    return HttpResponse('<!DOCTYPE html><html><head>Some header</head>'
-            '<body><h1>My First Heading</h1><p>My first paragraph.</p>'
-            '</body></html>')
+    return HttpResponse(
+        "<!DOCTYPE html><html><head>Some header</head>"
+        "<body><h1>My First Heading</h1><p>My first paragraph.</p>"
+        "</body></html>"
+    )
 
 
 def html_insertion_unnamed_attachment_header(request):
-    response = HttpResponse('<!DOCTYPE html><html><head>Some header</head>'
-            '<body><h1>My First Heading</h1><p>My first paragraph.</p>'
-            '</body></html>')
-    response['Content-Disposition'] = 'attachment'
+    response = HttpResponse(
+        "<!DOCTYPE html><html><head>Some header</head>"
+        "<body><h1>My First Heading</h1><p>My first paragraph.</p>"
+        "</body></html>"
+    )
+    response["Content-Disposition"] = "attachment"
     return response
 
 
 def html_insertion_named_attachment_header(request):
-    response = HttpResponse('<!DOCTYPE html><html><head>Some header</head>'
-            '<body><h1>My First Heading</h1><p>My first paragraph.</p>'
-            '</body></html>')
-    response['Content-Disposition'] = 'Attachment; filename="X"'
+    response = HttpResponse(
+        "<!DOCTYPE html><html><head>Some header</head>"
+        "<body><h1>My First Heading</h1><p>My first paragraph.</p>"
+        "</body></html>"
+    )
+    response["Content-Disposition"] = 'Attachment; filename="X"'
    return response
 
 
 def inclusion_tag(request):
-    return render(request, 'main.html', {}, content_type="text/html")
+    return render(request, "main.html", {}, content_type="text/html")
 
 
 def template_tags(request):
-    return render(request, 'main.html', {}, content_type="text/html")
+    return render(request, "main.html", {}, content_type="text/html")
 
 
 def render_exception_function(request):
-    return render(request, 'render_exception.html')
+    return render(request, "render_exception.html")
 
 
 class RenderExceptionClass(TemplateView):
-    template_name = 'render_exception.html'
+    template_name = "render_exception.html"
 
 
 def gzip_html_insertion(request):
     # contents must be at least 200 bytes for gzip middleware to work
-    contents = '*' * 200
-    return HttpResponse('<!DOCTYPE html><html><head>Some header</head>'
-        '<body><h1>My First Heading</h1>%s</body></html>' % contents)
+    contents = "*" * 200
+    return HttpResponse(
+        "<!DOCTYPE html><html><head>Some header</head>"
+        "<body><h1>My First Heading</h1>%s</body></html>" % contents
+    )
diff --git a/tests/framework_flask/_test_compress.py b/tests/framework_flask/_test_compress.py
index f3c9fbf2be..1fbf207689 100644
--- a/tests/framework_flask/_test_compress.py
+++ b/tests/framework_flask/_test_compress.py
@@ -18,14 +18,10 @@
 import StringIO as IO
 
 import webtest
-
-from flask import Flask
-from flask import Response
-from flask import send_file
+from flask import Flask, Response, send_file
 from flask_compress import Compress
 
-from newrelic.api.transaction import (get_browser_timing_header,
-        get_browser_timing_footer)
+from newrelic.api.transaction import get_browser_timing_header
 
 application = Flask(__name__)
 
@@ -33,57 +29,57 @@
 compress.init_app(application)
 
 
-@application.route('/compress')
+@application.route("/compress")
 def index_page():
-    return '<html><body>' + 500 * 'X' + '</body></html>'
+    return "<html><body>" + 500 * "X" + "</body></html>"
 
 
-@application.route('/html_insertion')
+@application.route("/html_insertion")
 def html_insertion():
-    return ('<!DOCTYPE html><html><head>Some header</head>'
-            '<body><h1>My First Heading</h1><p>My first paragraph.</p>'
-            '</body></html>')
+    return (
+        "<!DOCTYPE html><html><head>Some header</head>"
+        "<body><h1>My First Heading</h1><p>My first paragraph.</p>"
+        "</body></html>"
+    )
 
 
-@application.route('/html_insertion_manual')
+@application.route("/html_insertion_manual")
 def html_insertion_manual():
     header = get_browser_timing_header()
-    footer = get_browser_timing_footer()
-    header = get_browser_timing_header()
-    footer = get_browser_timing_footer()
-    assert header == ''
-    assert footer == ''
+    assert header == ""
 
-    return ('<!DOCTYPE html><html><head>Some header</head>'
-            '<body><h1>My First Heading</h1><p>My first paragraph.</p>'
-            '</body></html>')
+    return (
+        "<!DOCTYPE html><html><head>Some header</head>"
+        "<body><h1>My First Heading</h1><p>My first paragraph.</p>"
+        "</body></html>"
+    )
 
 
-@application.route('/html_insertion_unnamed_attachment_header')
+@application.route("/html_insertion_unnamed_attachment_header")
 def html_insertion_unnamed_attachment_header():
     response = Response(
-        response='<!DOCTYPE html><html><head>Some header</head>'
-        '<body><h1>My First Heading</h1><p>My first paragraph.</p>'
-        '</body></html>')
-    response.headers.add('Content-Disposition',
-            'attachment')
+        response="<!DOCTYPE html><html><head>Some header</head>"
+        "<body><h1>My First Heading</h1><p>My first paragraph.</p>"
+        "</body></html>"
+    )
+    response.headers.add("Content-Disposition", "attachment")
     return response
 
 
-@application.route('/html_insertion_named_attachment_header')
+@application.route("/html_insertion_named_attachment_header")
 def html_insertion_named_attachment_header():
     response = Response(
-        response='<!DOCTYPE html><html><head>Some header</head>'
-        '<body><h1>My First Heading</h1><p>My first paragraph.</p>'
-        '</body></html>')
-    response.headers.add('Content-Disposition',
-            'attachment; filename="X"')
+        response="<!DOCTYPE html><html><head>Some header</head>"
+        "<body><h1>My First Heading</h1><p>My first paragraph.</p>"
+        "</body></html>"
+    )
+    response.headers.add("Content-Disposition", 'attachment; filename="X"')
    return response
 
 
-@application.route('/html_served_from_file')
+@application.route("/html_served_from_file")
 def html_served_from_file():
     file = IO()
     contents = b"""
@@ -93,10 +89,10 @@ def html_served_from_file():
     """
     file.write(contents)
     file.seek(0)
-    return send_file(file, mimetype='text/html')
+    return send_file(file, mimetype="text/html")
 
 
-@application.route('/text_served_from_file')
+@application.route("/text_served_from_file")
 def text_served_from_file():
     file = IO()
     contents = b"""
@@ -106,17 +102,19 @@ def text_served_from_file():
     """
     file.write(contents)
     file.seek(0)
-    return send_file(file, mimetype='text/plain')
+    return send_file(file, mimetype="text/plain")
 
 
 _test_application = webtest.TestApp(application)
 
 
-@application.route('/empty_content_type')
+@application.route("/empty_content_type")
 def empty_content_type():
     response = Response(
-        response='<!DOCTYPE html><html><head>Some header</head>'
-        '<body><h1>My First Heading</h1><p>My first paragraph.</p>'
-        '</body></html>', mimetype='')
+        response="<!DOCTYPE html><html><head>Some header</head>"
+        "<body><h1>My First Heading</h1><p>My first paragraph.</p>"
+        "</body></html>",
+        mimetype="",
+    )
     assert response.mimetype is None
     return response
diff --git a/tests/framework_flask/test_application.py b/tests/framework_flask/test_application.py
index de7a430191..508fb68934 100644
--- a/tests/framework_flask/test_application.py
+++ b/tests/framework_flask/test_application.py
@@ -13,23 +13,29 @@
 # limitations under the License.
 
 import pytest
-
+from conftest import async_handler_support, skip_if_not_async_handler_support
 from testing_support.fixtures import (
     override_application_settings,
-    validate_tt_parenting)
-from testing_support.validators.validate_code_level_metrics import validate_code_level_metrics
-from testing_support.validators.validate_transaction_metrics import validate_transaction_metrics
-from testing_support.validators.validate_transaction_errors import validate_transaction_errors
+    validate_tt_parenting,
+)
+from testing_support.validators.validate_code_level_metrics import (
+    validate_code_level_metrics,
+)
+from testing_support.validators.validate_transaction_errors import (
+    validate_transaction_errors,
+)
+from testing_support.validators.validate_transaction_metrics import (
+    validate_transaction_metrics,
+)
 
 from newrelic.packages import six
 
-from conftest import async_handler_support, skip_if_not_async_handler_support
-
 try:
     # The __version__ attribute was only added in 0.7.0.
     # Flask team does not use semantic versioning during development.
from flask import __version__ as flask_version - flask_version = tuple([int(v) for v in flask_version.split('.')]) + + flask_version = tuple([int(v) for v in flask_version.split(".")]) is_gt_flask060 = True is_dev_version = False except ValueError: @@ -39,8 +45,7 @@ is_gt_flask060 = False is_dev_version = False -requires_endpoint_decorator = pytest.mark.skipif(not is_gt_flask060, - reason="The endpoint decorator is not supported.") +requires_endpoint_decorator = pytest.mark.skipif(not is_gt_flask060, reason="The endpoint decorator is not supported.") def target_application(): @@ -61,226 +66,254 @@ def target_application(): _test_application_index_scoped_metrics = [ - ('Function/flask.app:Flask.wsgi_app', 1), - ('Python/WSGI/Application', 1), - ('Python/WSGI/Response', 1), - ('Python/WSGI/Finalize', 1), - ('Function/_test_application:index_page', 1), - ('Function/werkzeug.wsgi:ClosingIterator.close', 1)] + ("Function/flask.app:Flask.wsgi_app", 1), + ("Python/WSGI/Application", 1), + ("Python/WSGI/Response", 1), + ("Python/WSGI/Finalize", 1), + ("Function/_test_application:index_page", 1), + ("Function/werkzeug.wsgi:ClosingIterator.close", 1), +] _test_application_index_tt_parenting = ( - 'TransactionNode', [ - ('FunctionNode', [ - ('FunctionNode', [ - ('FunctionNode', []), - ('FunctionNode', []), - ('FunctionNode', []), - # some flask versions have more FunctionNodes here, as appended - # below - ]), - ]), - ('FunctionNode', []), - ('FunctionNode', [ - ('FunctionNode', []), - ]), - ] + "TransactionNode", + [ + ( + "FunctionNode", + [ + ( + "FunctionNode", + [ + ("FunctionNode", []), + ("FunctionNode", []), + ("FunctionNode", []), + # some flask versions have more FunctionNodes here, as appended + # below + ], + ), + ], + ), + ("FunctionNode", []), + ( + "FunctionNode", + [ + ("FunctionNode", []), + ], + ), + ], ) if is_dev_version or (is_gt_flask060 and flask_version >= (0, 7)): _test_application_index_tt_parenting[1][0][1][0][1].append( - ('FunctionNode', []), + 
("FunctionNode", []), ) if is_dev_version or (is_gt_flask060 and flask_version >= (0, 9)): _test_application_index_tt_parenting[1][0][1][0][1].append( - ('FunctionNode', []), + ("FunctionNode", []), ) + @validate_transaction_errors(errors=[]) -@validate_transaction_metrics('_test_application:index_page', - scoped_metrics=_test_application_index_scoped_metrics) +@validate_transaction_metrics("_test_application:index_page", scoped_metrics=_test_application_index_scoped_metrics) @validate_tt_parenting(_test_application_index_tt_parenting) @validate_code_level_metrics("_test_application", "index_page") def test_application_index(): application = target_application() - response = application.get('/index') - response.mustcontain('INDEX RESPONSE') + response = application.get("/index") + response.mustcontain("INDEX RESPONSE") + _test_application_async_scoped_metrics = [ - ('Function/flask.app:Flask.wsgi_app', 1), - ('Python/WSGI/Application', 1), - ('Python/WSGI/Response', 1), - ('Python/WSGI/Finalize', 1), - ('Function/_test_application_async:async_page', 1), - ('Function/werkzeug.wsgi:ClosingIterator.close', 1)] + ("Function/flask.app:Flask.wsgi_app", 1), + ("Python/WSGI/Application", 1), + ("Python/WSGI/Response", 1), + ("Python/WSGI/Finalize", 1), + ("Function/_test_application_async:async_page", 1), + ("Function/werkzeug.wsgi:ClosingIterator.close", 1), +] + @skip_if_not_async_handler_support @validate_transaction_errors(errors=[]) -@validate_transaction_metrics('_test_application_async:async_page', - scoped_metrics=_test_application_async_scoped_metrics) +@validate_transaction_metrics( + "_test_application_async:async_page", scoped_metrics=_test_application_async_scoped_metrics +) @validate_tt_parenting(_test_application_index_tt_parenting) @validate_code_level_metrics("_test_application_async", "async_page") def test_application_async(): application = target_application() - response = application.get('/async') - response.mustcontain('ASYNC RESPONSE') + response = 
application.get("/async") + response.mustcontain("ASYNC RESPONSE") + _test_application_endpoint_scoped_metrics = [ - ('Function/flask.app:Flask.wsgi_app', 1), - ('Python/WSGI/Application', 1), - ('Python/WSGI/Response', 1), - ('Python/WSGI/Finalize', 1), - ('Function/_test_application:endpoint_page', 1), - ('Function/werkzeug.wsgi:ClosingIterator.close', 1)] + ("Function/flask.app:Flask.wsgi_app", 1), + ("Python/WSGI/Application", 1), + ("Python/WSGI/Response", 1), + ("Python/WSGI/Finalize", 1), + ("Function/_test_application:endpoint_page", 1), + ("Function/werkzeug.wsgi:ClosingIterator.close", 1), +] @validate_transaction_errors(errors=[]) -@validate_transaction_metrics('_test_application:endpoint_page', - scoped_metrics=_test_application_endpoint_scoped_metrics) +@validate_transaction_metrics( + "_test_application:endpoint_page", scoped_metrics=_test_application_endpoint_scoped_metrics +) @validate_code_level_metrics("_test_application", "endpoint_page") def test_application_endpoint(): application = target_application() - response = application.get('/endpoint') - response.mustcontain('ENDPOINT RESPONSE') + response = application.get("/endpoint") + response.mustcontain("ENDPOINT RESPONSE") _test_application_error_scoped_metrics = [ - ('Function/flask.app:Flask.wsgi_app', 1), - ('Python/WSGI/Application', 1), - ('Python/WSGI/Response', 1), - ('Python/WSGI/Finalize', 1), - ('Function/_test_application:error_page', 1), - ('Function/flask.app:Flask.handle_exception', 1), - ('Function/werkzeug.wsgi:ClosingIterator.close', 1), - ('Function/flask.app:Flask.handle_user_exception', 1), - ('Function/flask.app:Flask.handle_user_exception', 1)] + ("Function/flask.app:Flask.wsgi_app", 1), + ("Python/WSGI/Application", 1), + ("Python/WSGI/Response", 1), + ("Python/WSGI/Finalize", 1), + ("Function/_test_application:error_page", 1), + ("Function/flask.app:Flask.handle_exception", 1), + ("Function/werkzeug.wsgi:ClosingIterator.close", 1), + 
("Function/flask.app:Flask.handle_user_exception", 1), + ("Function/flask.app:Flask.handle_user_exception", 1), +] if six.PY3: - _test_application_error_errors = ['builtins:RuntimeError'] + _test_application_error_errors = ["builtins:RuntimeError"] else: - _test_application_error_errors = ['exceptions:RuntimeError'] + _test_application_error_errors = ["exceptions:RuntimeError"] @validate_transaction_errors(errors=_test_application_error_errors) -@validate_transaction_metrics('_test_application:error_page', - scoped_metrics=_test_application_error_scoped_metrics) +@validate_transaction_metrics("_test_application:error_page", scoped_metrics=_test_application_error_scoped_metrics) @validate_code_level_metrics("_test_application", "error_page") def test_application_error(): application = target_application() - application.get('/error', status=500, expect_errors=True) + application.get("/error", status=500, expect_errors=True) _test_application_abort_404_scoped_metrics = [ - ('Function/flask.app:Flask.wsgi_app', 1), - ('Python/WSGI/Application', 1), - ('Python/WSGI/Response', 1), - ('Python/WSGI/Finalize', 1), - ('Function/_test_application:abort_404_page', 1), - ('Function/flask.app:Flask.handle_http_exception', 1), - ('Function/werkzeug.wsgi:ClosingIterator.close', 1), - ('Function/flask.app:Flask.handle_user_exception', 1)] + ("Function/flask.app:Flask.wsgi_app", 1), + ("Python/WSGI/Application", 1), + ("Python/WSGI/Response", 1), + ("Python/WSGI/Finalize", 1), + ("Function/_test_application:abort_404_page", 1), + ("Function/flask.app:Flask.handle_http_exception", 1), + ("Function/werkzeug.wsgi:ClosingIterator.close", 1), + ("Function/flask.app:Flask.handle_user_exception", 1), +] @validate_transaction_errors(errors=[]) -@validate_transaction_metrics('_test_application:abort_404_page', - scoped_metrics=_test_application_abort_404_scoped_metrics) +@validate_transaction_metrics( + "_test_application:abort_404_page", 
scoped_metrics=_test_application_abort_404_scoped_metrics +) @validate_code_level_metrics("_test_application", "abort_404_page") def test_application_abort_404(): application = target_application() - application.get('/abort_404', status=404) + application.get("/abort_404", status=404) _test_application_exception_404_scoped_metrics = [ - ('Function/flask.app:Flask.wsgi_app', 1), - ('Python/WSGI/Application', 1), - ('Python/WSGI/Response', 1), - ('Python/WSGI/Finalize', 1), - ('Function/_test_application:exception_404_page', 1), - ('Function/flask.app:Flask.handle_http_exception', 1), - ('Function/werkzeug.wsgi:ClosingIterator.close', 1), - ('Function/flask.app:Flask.handle_user_exception', 1)] + ("Function/flask.app:Flask.wsgi_app", 1), + ("Python/WSGI/Application", 1), + ("Python/WSGI/Response", 1), + ("Python/WSGI/Finalize", 1), + ("Function/_test_application:exception_404_page", 1), + ("Function/flask.app:Flask.handle_http_exception", 1), + ("Function/werkzeug.wsgi:ClosingIterator.close", 1), + ("Function/flask.app:Flask.handle_user_exception", 1), +] @validate_transaction_errors(errors=[]) -@validate_transaction_metrics('_test_application:exception_404_page', - scoped_metrics=_test_application_exception_404_scoped_metrics) +@validate_transaction_metrics( + "_test_application:exception_404_page", scoped_metrics=_test_application_exception_404_scoped_metrics +) @validate_code_level_metrics("_test_application", "exception_404_page") def test_application_exception_404(): application = target_application() - application.get('/exception_404', status=404) + application.get("/exception_404", status=404) _test_application_not_found_scoped_metrics = [ - ('Function/flask.app:Flask.wsgi_app', 1), - ('Python/WSGI/Application', 1), - ('Python/WSGI/Response', 1), - ('Python/WSGI/Finalize', 1), - ('Function/flask.app:Flask.handle_http_exception', 1), - ('Function/werkzeug.wsgi:ClosingIterator.close', 1), - ('Function/flask.app:Flask.handle_user_exception', 1)] + 
("Function/flask.app:Flask.wsgi_app", 1), + ("Python/WSGI/Application", 1), + ("Python/WSGI/Response", 1), + ("Python/WSGI/Finalize", 1), + ("Function/flask.app:Flask.handle_http_exception", 1), + ("Function/werkzeug.wsgi:ClosingIterator.close", 1), + ("Function/flask.app:Flask.handle_user_exception", 1), +] @validate_transaction_errors(errors=[]) -@validate_transaction_metrics('flask.app:Flask.handle_http_exception', - scoped_metrics=_test_application_not_found_scoped_metrics) +@validate_transaction_metrics( + "flask.app:Flask.handle_http_exception", scoped_metrics=_test_application_not_found_scoped_metrics +) def test_application_not_found(): application = target_application() - application.get('/missing', status=404) + application.get("/missing", status=404) _test_application_render_template_string_scoped_metrics = [ - ('Function/flask.app:Flask.wsgi_app', 1), - ('Python/WSGI/Application', 1), - ('Python/WSGI/Response', 1), - ('Python/WSGI/Finalize', 1), - ('Function/_test_application:template_string', 1), - ('Function/werkzeug.wsgi:ClosingIterator.close', 1), - ('Template/Compile/