
Test cases fail randomly because of missing elements #750

Open
1 task done
aqib008 opened this issue Oct 23, 2023 · 3 comments

Comments


aqib008 commented Oct 23, 2023

Elixir and Erlang/OTP versions

Elixir: 1.14.3
Erlang: 25.3

Operating system

Mac

Browser

Chrome

Driver

Chromedriver

Correct Configuration

  • I confirm that I have Wallaby configured correctly.

Current behavior

We have 2000+ test cases that use Wallaby to test LiveViews. Whenever we run them, random test cases fail because of missing elements or because element counts don't match. Sometimes they pass locally but fail in the pipeline (GitHub Actions), where we see the same behaviour (random failures).

Expected behavior

Expected behaviour is consistency: if a test case has a wrong implementation, it should fail every time, and if it has a correct implementation, it should pass every time. We are facing inconsistency.

Test Code & HTML

feature("cancels if client hold expires", %{
  session: session,
  stripe_payment_intent: intent,
  photo_ids: photo_ids
}) do
  session
  |> place_order(photo_ids)
  |> trigger_stripe_webhook(:connect, "payment_intent.canceled", %{
    intent
    | status: "canceled"
  })
  |> click(link("My orders"))
  |> assert_text("Order Canceled")
  |> click(link("View details"))
  |> assert_text("Order Canceled")
end

This test case sometimes fails at |> assert_text("Order Canceled")

Demonstration Project

No response

@WillRochaThomas

We see the same thing in our codebase. I don't think it's easy to help you though without more details (the exact error message, OS/memory/processor specs of your machines running the tests, configuration you're using for max-cases and max_wait_time, chrome and chromedriver versions).

One suggestion from me is to enable screenshots. For us, this has shown it's always the case that the next page hasn't loaded in time (the screenshots often show the browser still displaying the previous page whose button/link was clicked, hence the missing element or text). My thoughts have been that this is probably not a Wallaby issue, given it's a fairly common challenge to write reliable browser tests at all. They are just slower, have more moving parts, and are more prone to this kind of behaviour.
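As an illustration of that suggestion, a screenshot can be dropped right before the assertion that tends to fail. This is only a sketch, reusing the queries from the issue's own example; `take_screenshot/1` is part of `Wallaby.Browser`:

```elixir
# Sketch: capture what the browser is actually rendering just before the
# flaky assertion, so failure artifacts show whether the old page was
# still on screen.
session
|> click(link("My orders"))
|> take_screenshot()                 # written to Wallaby's :screenshot_dir
|> assert_text("Order Canceled")
```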

My plans were to experiment with increasing our timeouts and reducing the number of parallel tests being run, to try and get to a point of more stability, and that might be the best option for you too. When running tests locally with --max-cases=1 and a max_wait_time of 7000, it's much less common for us to see random failures (although it is still happening....and that seems strange with such a long timeout). Our CI runs with --max-cases 8 and a max_wait_time of 10000 and failures are much more frequent.
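For reference, the knobs mentioned above look like this; the values are illustrative, not recommendations:

```elixir
# config/test.exs — illustrative values only
config :wallaby,
  max_wait_time: 7_000,          # ms Wallaby keeps retrying a query before failing
  screenshot_on_failure: true    # save a screenshot whenever a query fails

# Parallelism is controlled on the command line, e.g.:
#   mix test --max-cases 1
```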

I'd welcome other suggestions of how to debug and tune things. I am surprised that I have seen failures locally with the 7 second timeout and 1 test case running at a time. I am running with a 2015 MacBook Pro though, which has a 2.2 GHz Quad-Core Intel Core i7 processor and 16 GB 1600 MHz DDR3 RAM. I'll hopefully be upgrading my machine soon!


cjbottaro commented Nov 13, 2023

My plans were to experiment with increasing our timeouts

I'm confused about what uses timeouts and what doesn't, and what's the best way to wait on some DOM.

This code was flaky for us:

|> visit("/users/sign_in")
|> fill_in(text_field("user[email]"), with: user.email)
|> fill_in(text_field("user[password]"), with: user.password)
|> click(button("Sign In"))

When it failed, Wallaby would raise an error from click/1 saying something like "can't find button, most likely the dom isn't ready, using find/2,3 will probably fix it."

Changing the code to this makes the test not flaky:

|> visit("/users/sign_in")
|> find(button("Sign In"), fn _ -> nil end)
|> fill_in(text_field("user[email]"), with: user.email)
|> fill_in(text_field("user[password]"), with: user.password)
|> click(button("Sign In"))

Using find/3 with an empty function feels wonky though; why isn't there a wait_for/2 function? Or why doesn't every function that relies on some DOM go through the timeout/retry process?
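A `wait_for/2` along those lines can be written as a thin wrapper over `find/3`, which already retries up to `:max_wait_time` internally. This is a hypothetical helper, not part of Wallaby's API:

```elixir
# Hypothetical helper: "wait until this element exists", expressed via
# find/3's built-in retry. The callback deliberately does nothing; we
# only want the waiting, and find/3 with a callback returns the parent,
# so the helper pipes cleanly.
def wait_for(session, query) do
  Wallaby.Browser.find(session, query, fn _element -> :ok end)
end
```

With that, the workaround above becomes `|> wait_for(button("Sign In"))`, which reads as intended.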


nathanl commented Jan 5, 2024

Not because of flakiness, but to wait on external hardware to do something that should cause the UI to update, I just wrote this little function:

  @doc """
  In a Wallaby test, wait up to the specified number of milliseconds for the
  specified element to be present.
  Note that Wallaby has its own timeout (apparently around 3 seconds) when
  checking for an element, so that much delay is built in every time this
  function is called or recurses.
  """
  @spec await_element_for_ms(
          parent :: Wallaby.Browser.parent(),
          query :: Wallaby.Query.t(),
          ms :: integer()
        ) :: Wallaby.Browser.parent()
  def await_element_for_ms(parent, _query, ms) when is_integer(ms) and ms <= 0 do
    # Continue test
    parent
  end

  def await_element_for_ms(parent, query, ms) when is_integer(ms) do
    started_at = System.monotonic_time(:millisecond)
    # This appears to take about 3 seconds
    present? = Wallaby.Browser.has?(parent, query)

    if present? do
      # Continue test
      parent
    else
      ended_at = System.monotonic_time(:millisecond)
      elapsed = ended_at - started_at
      await_element_for_ms(parent, query, ms - elapsed)
    end
  end

Maybe that would be useful for y'all?

If the maintainers would like to put this into Wallaby, feel free to use or adapt this code.
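A usage sketch of that helper; the button label, CSS selector, and timeout here are invented for illustration:

```elixir
# Assuming await_element_for_ms/3 lives in your test support module and
# some external process eventually adds a ".job-complete" element:
session
|> click(button("Start calibration"))                        # hypothetical UI
|> await_element_for_ms(Query.css(".job-complete"), 15_000)
|> assert_text("Calibration complete")
```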
