The spot market E2E tests make a real spot market bid, and are thus very sensitive to changes in spot market pricing. We've done some work in the past to stabilize them by making it more likely that the tests will pick a winning bid amount without exceeding the maximum bid, but we still sometimes fail to do that. From my vague memory of dealing with these tests before, if we bid lower, the tests frequently (always?) time out because the bid wasn't high enough to actually win a device.
One way to deal with this would be to set up the E2E tests to use a mock API for tests that need to win a device. If we don't already have tests that are meant to fail to get a device or meant to overbid, we should consider adding those; they can still hit the live API because we should be able to reliably under- or over-bid on purpose.
```
=== NAME  TestAccMetalSpotMarketRequestCreate_WithTimeout
    resource_metal_spot_market_request_acc_test.go:245: Step 1/1, expected an error with pattern, no match on: Error running apply: exit status 1

    Error: POST https://api.equinix.com/metal/v1/projects/0667bb71-9428-4919-8c1d-58fc0fb95613/spot-market-requests?include=devices%2Cproject%2Cplan: 422 2.21 exceeds the maximum bid price of 2.20

      with equinix_metal_spot_market_request.request,
      on terraform_plugin_test.tf line 122, in resource "equinix_metal_spot_market_request" "request":
     122: resource "equinix_metal_spot_market_request" "request" {
```
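Failures like the 422 above could be avoided by clamping the test's bid to the reported maximum before submitting it. A minimal sketch, where `pickBid` and the 10% headroom factor are hypothetical and both prices would come from the live pricing API:

```go
package main

import "fmt"

// pickBid is a hypothetical helper: it chooses a bid comfortably above
// the current market price (so the request doesn't time out with a
// losing bid) but never above the plan's maximum bid price (so the API
// doesn't reject it with a 422 like the one above).
func pickBid(currentPrice, maxBid float64) float64 {
	bid := currentPrice * 1.10 // 10% headroom over the market price
	if bid > maxBid {
		bid = maxBid // clamp to the maximum the API will accept
	}
	return bid
}

func main() {
	fmt.Println(pickBid(1.90, 2.20)) // headroom fits under the cap
	fmt.Println(pickBid(2.15, 2.20)) // clamped to the maximum bid
}
```

A deliberate-overbid test would instead submit `maxBid` plus some increment and assert on the 422, which should be reliably reproducible against the live API.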
We would want to avoid creating even temporary price skews or evicting real users to satisfy our E2E tests. Because the E2E tests are somewhat sticky and run multiple times per day, their bids could skew normal market pricing unless safeguards are in place.
The TF E2E tests would have an unfair advantage in an open spot market.
Originally posted by @displague in #501 (comment)