Conversation

purple4reina
Contributor

@purple4reina commented Oct 1, 2025

What does this PR do?

  1. Runs unit tests twice daily using the latest datadog-lambda code from main and the most recently released ddtrace.
  2. Adds missing tests for specific imports within datadog_lambda.wrapper
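The twice-daily run described in item 1 is typically wired up as a scheduled GitHub Actions workflow. A minimal sketch of such a schedule; the workflow name, Python version, and install steps are illustrative, not this PR's actual workflow file:

```yaml
name: replay-tests  # illustrative name
on:
  schedule:
    # two runs per day, 12 hours apart (times are UTC)
    - cron: "0 5,17 * * *"
jobs:
  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          ref: main  # latest datadog-lambda code from main
      - uses: actions/setup-python@v5
        with:
          python-version: "3.13"
      # installing ddtrace unpinned pulls the most recently released version
      - run: pip install -e . ddtrace pytest
      - run: pytest tests
```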

Motivation

#661

Testing Guidelines

Additional Notes

We should ideally also have similar tests in dd-trace-py to ensure that these methods are not renamed in the first place.
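One way to guard against such renames on the dd-trace-py side is a test that fails whenever an expected (module, attribute) pair disappears. The sketch below is a hypothetical helper, not code from either repository, and it is demonstrated on stdlib names; in dd-trace-py the pairs would list the ddtrace internals that `datadog_lambda.wrapper` imports:

```python
import importlib


def assert_symbols_exist(pairs):
    """Fail loudly if any (module, attribute) pair is missing or renamed."""
    missing = []
    for module_name, attr in pairs:
        try:
            module = importlib.import_module(module_name)
        except ImportError:
            missing.append(f"{module_name} (module not importable)")
            continue
        if not hasattr(module, attr):
            missing.append(f"{module_name}.{attr}")
    assert not missing, f"renamed or removed symbols: {missing}"


# Demonstrated on stdlib names; a real dd-trace-py test would list the
# entry points exercised by this PR's tests (SpanExceptionHandler.enable,
# LLMObs.enable, profiler.Profiler.start, and so on).
assert_symbols_exist([
    ("json", "dumps"),
    ("collections", "OrderedDict"),
])
```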

Types of Changes

  • Bug fix
  • New feature
  • Breaking change
  • Misc (docs, refactoring, dependency upgrade, etc.)

Check all that apply

  • This PR's description is comprehensive
  • This PR contains breaking changes that are documented in the description
  • This PR introduces new APIs or parameters that are documented and unlikely to change in the foreseeable future
  • This PR impacts documentation, and it has been updated (or a ticket has been logged)
  • This PR's changes are covered by the automated tests
  • This PR collects user input/sensitive content into Datadog
  • This PR passes the integration tests (ask a Datadog member to run the tests)

@purple4reina requested review from a team as code owners on October 1, 2025 17:27
@purple4reina force-pushed the rey.abolofia/replay-testing branch from e330818 to a4b9c9d on October 1, 2025 18:10
@lucaspimentel
Member

@codex review

@chatgpt-codex-connector (bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.


Comment on lines +794 to +901
```python
@patch("datadog_lambda.config.Config.exception_replay_enabled", True)
def test_exception_replay_enabled(monkeypatch):
    importlib.reload(wrapper)

    original_SpanExceptionHandler_enable = wrapper.SpanExceptionHandler.enable
    SpanExceptionHandler_enable_calls = []

    def SpanExceptionHandler_enable(*args, **kwargs):
        SpanExceptionHandler_enable_calls.append((args, kwargs))
        return original_SpanExceptionHandler_enable(*args, **kwargs)

    original_SignalUploader_periodic = wrapper.SignalUploader.periodic
    SignalUploader_periodic_calls = []

    def SignalUploader_periodic(*args, **kwargs):
        SignalUploader_periodic_calls.append((args, kwargs))
        return original_SignalUploader_periodic(*args, **kwargs)

    monkeypatch.setattr(
        "datadog_lambda.wrapper.SpanExceptionHandler.enable",
        SpanExceptionHandler_enable,
    )
    monkeypatch.setattr(
        "datadog_lambda.wrapper.SignalUploader.periodic", SignalUploader_periodic
    )

    expected_response = {
        "statusCode": 200,
        "body": "This should be returned",
    }

    @wrapper.datadog_lambda_wrapper
    def lambda_handler(event, context):
        return expected_response

    response = lambda_handler({}, get_mock_context())

    assert response == expected_response
    assert len(SpanExceptionHandler_enable_calls) == 1
    assert len(SignalUploader_periodic_calls) == 1


@patch("datadog_lambda.config.Config.profiling_enabled", True)
def test_profiling_enabled(monkeypatch):
    importlib.reload(wrapper)

    original_Profiler_start = wrapper.profiler.Profiler.start
    Profiler_start_calls = []

    def Profiler_start(*args, **kwargs):
        Profiler_start_calls.append((args, kwargs))
        return original_Profiler_start(*args, **kwargs)

    monkeypatch.setattr("datadog_lambda.wrapper.is_new_sandbox", lambda: True)
    monkeypatch.setattr(
        "datadog_lambda.wrapper.profiler.Profiler.start", Profiler_start
    )

    expected_response = {
        "statusCode": 200,
        "body": "This should be returned",
    }

    @wrapper.datadog_lambda_wrapper
    def lambda_handler(event, context):
        return expected_response

    response = lambda_handler({}, get_mock_context())

    assert response == expected_response
    assert len(Profiler_start_calls) == 1


@patch("datadog_lambda.config.Config.llmobs_enabled", True)
def test_llmobs_enabled(monkeypatch):
    importlib.reload(wrapper)

    original_LLMObs_enable = wrapper.LLMObs.enable
    LLMObs_enable_calls = []

    def LLMObs_enable(*args, **kwargs):
        LLMObs_enable_calls.append((args, kwargs))
        return original_LLMObs_enable(*args, **kwargs)

    original_LLMObs_flush = wrapper.LLMObs.flush
    LLMObs_flush_calls = []

    def LLMObs_flush(*args, **kwargs):
        LLMObs_flush_calls.append((args, kwargs))
        return original_LLMObs_flush(*args, **kwargs)

    monkeypatch.setattr("datadog_lambda.wrapper.LLMObs.enable", LLMObs_enable)
    monkeypatch.setattr("datadog_lambda.wrapper.LLMObs.flush", LLMObs_flush)

    expected_response = {
        "statusCode": 200,
        "body": "This should be returned",
    }

    @wrapper.datadog_lambda_wrapper
    def lambda_handler(event, context):
        return expected_response

    response = lambda_handler({}, get_mock_context())

    assert response == expected_response
    assert len(LLMObs_enable_calls) == 1
    assert len(LLMObs_flush_calls) == 1
```


P1: Avoid starting real ddtrace instrumentation in unit tests

The new tests monkeypatch the ddtrace hooks but still call the original implementations (SpanExceptionHandler.enable, SignalUploader.periodic, profiler.Profiler.start, LLMObs.enable/flush). Executing those real implementations requires optional ddtrace extras and spins up background workers; in environments where profiling or exception-replay components are not installed or configured, the imports and start()/enable() calls will raise or leave threads running, causing CI failures and flaky behavior. To verify that the wrapper invokes these hooks, stub them and record the calls rather than invoking the underlying implementations.
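The stub-and-record approach the reviewer suggests can be sketched as follows. The helper below is hypothetical, not code from the PR; the monkeypatch target in the commented usage mirrors the test above:

```python
def make_recording_stub(calls, return_value=None):
    """Build a stub that records each invocation instead of delegating to
    the real ddtrace implementation, so no background workers start."""
    def stub(*args, **kwargs):
        calls.append((args, kwargs))
        return return_value
    return stub


# In the test body, the wrap-and-delegate pattern would become:
#
#   enable_calls = []
#   monkeypatch.setattr(
#       "datadog_lambda.wrapper.SpanExceptionHandler.enable",
#       make_recording_stub(enable_calls),
#   )
#   ...
#   assert len(enable_calls) == 1
```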


Contributor

@joeyzhao2018 left a comment


LGTM

@purple4reina merged commit 09143fe into main on Oct 3, 2025
84 checks passed
@purple4reina deleted the rey.abolofia/replay-testing branch on October 3, 2025 18:04
purple4reina added a commit to DataDog/dd-trace-py that referenced this pull request Oct 6, 2025
…python still valid (#14746)

## Description

<!-- Provide an overview of the change and motivation for the change -->

Customer reported that a class name changed in the most recent version
of ddtrace which was then causing errors when attempting to import
`datadog_lambda`. See
DataDog/datadog-lambda-python#661 and
#14653.

## Testing

<!-- Describe your testing strategy or note what tests are included -->

## Risks

<!-- Note any risks associated with this change, or "None" if no risks
-->

It's just some tests, so risk is low.

## Additional Notes

<!-- Any other information that would be helpful for reviewers -->

Added additional tests on the `datadog_lambda` side as well, including
running our unit tests twice a day. See
DataDog/datadog-lambda-python#662.
