Add publishing smoke test #828
Conversation
Thanks, nice work. I've added a couple of questions inline.
It would be great if we could figure out how to test that publishing a draft works. At the moment we're testing the draft stack, and a full publish would reveal problems with the live stack: content-store, router-api, router.
That makes sense - was that your original intention when you wrote the card? I may have interpreted the outcome slightly differently, but I'm very happy to amend it to a) publish the draft and b) then remove what has been published.

Yeah, sorry the requirements were not clear. It would be good to publish the piece of content and also enable Smokey to run in parallel.
Force-pushed e8c729f to fd0e410
This has now been done.
```ruby
def publication_url
  @publication_url ||= "#{ENV['GOVUK_WEBSITE_ROOT']}#{publication_path}"
  @publication_url
end
```
You can delete this line, as the line above will return the same thing. Same for `@publication_path` and `@publication_title` above.
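For reference, `||=` both memoises and returns the value, so the helper reduces to a single expression. A minimal, self-contained sketch of the suggested style (the `PublishedPage` wrapper class here is invented for illustration; only the memoisation pattern comes from the PR):

```ruby
# Minimal illustration of the memoisation style suggested above.
# PublishedPage is a hypothetical stand-in, not the PR's actual class.
ENV['GOVUK_WEBSITE_ROOT'] ||= 'https://www.gov.uk'

class PublishedPage
  def initialize(path)
    @path = path
  end

  # Memoised path helper, standing in for the PR's publication_path.
  def publication_path
    @publication_path ||= @path
  end

  # `||=` already returns the memoised value, so no trailing line is needed.
  def publication_url
    @publication_url ||= "#{ENV['GOVUK_WEBSITE_ROOT']}#{publication_path}"
  end
end
```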
Good point, thanks!
```ruby
wait_until do
  visit page.current_url
  page.has_no_text?(:all, /not found/i)
end
```
Can you use a status code of 200 instead? I think this'll be less likely to change than the text.
The reason I didn't go for the status code was because I received the following when I used it:

```
Capybara::Driver::Base#status_code (Capybara::NotSupportedByDriverError)
```

We do use the `status_code` method elsewhere in the smoke tests, however none of those paths are currently being executed by Concourse; I imagine that if they were, we would be receiving the same errors.

I do agree that we should be using something less brittle than page content to determine success or failure. A status code would be ideal, but it might take some more investigation to figure out why it's not accessible.

That said, after doing some research it turns out that Selenium categorises checking status codes as part of tests in their worst-practices section, arguing instead that you should be more concerned with the contents of the page and/or the preceding steps. There also isn't any option to call `status_code` in the Selenium web driver, as the method doesn't exist.

There are some alternatives to get status codes working, such as adding a gem specifically for this, but the recommended approach is to capture status codes through a proxy. We already use BrowserMob Proxy, so I have followed some existing art elsewhere to get this set up, adding a new `get_status_code` method in smokey_steps.rb.
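The HAR file a proxy records is plain JSON following the HTTP Archive format, so reading a status code back out of it is straightforward. A hedged sketch of the idea (the `status_code_from_har` helper is hypothetical, not the actual `get_status_code` implementation in smokey_steps.rb, and the proxy setup is omitted):

```ruby
require 'json'

# Hypothetical helper: given HAR JSON captured by a proxy, return the HTTP
# status of the entry whose request URL matches. Returns nil if no entry
# matches. This is an illustration, not the PR's actual code.
def status_code_from_har(har_json, url)
  entries = JSON.parse(har_json).fetch('log').fetch('entries')
  entry = entries.find { |e| e.dig('request', 'url') == url }
  entry && entry.dig('response', 'status')
end
```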
Once we start to run the other smoke tests that rely on status codes (for the currently absent apps), we may discover that we need to change them over, especially lines 185-186 within smokey_steps.rb:

```ruby
actual_status = page.status_code.to_i
url = page.current_url
```
I haven't done that in this PR, as we will need to refactor the tests using this code so that they pass in a block containing the `visit`, allowing the HTTP archive (HAR) file to be populated. I feel it's better to postpone this until we're ready to add those absent apps to our stack, and check that everything is running correctly at that point in time.
Great work. It's a bit tightly coupled to the text of the pages under test, but that's pretty common for our smoke tests. There's also a risk we leave docs published in environments under test if unpublishing fails for some reason.
It might be nice to ensure that Smokey can't publish in production, even if the `@notproduction` flag is omitted. It's quite easy to forget to use this flag and point Smokey at prod.
Perhaps there's a signon permission we can give to smokey in environments under test, but not in prod? Of course, we'd want failures due to this reason to be clear in error messages.
Force-pushed 85e7e89 to 18e2c94
Thanks for this comment, I missed it when working on the other changes last week. That's a fair point about ensuring that Smokey can't publish in production, even if the `@notproduction` flag is omitted.
⭐ Nice work
Force-pushed 18e2c94 to 13259d2
Given the fact that we currently don't have the environment label available in our …
This commit adds a smoke test for Publisher to test the entire journey from creating a new piece of content through to unpublishing. Specifically, three separate scenarios create a new draft `Place` on the draft stack, publish this draft to the live stack, before finally proceeding to unpublish the content.

One of the requirements this smoke test needs to fulfil is to ensure that multiple runs of Smokey can operate in parallel, which means we need to dynamically generate the slug for the content we're creating (done using `SecureRandom.hex`).

Another complication is that, because we're using separate scenarios to encapsulate the main steps of the test (create draft, publish to live, unpublish), we have had to introduce state which can be shared across these scenarios. Specifically, the state being shared is the unique piece of content that we have created in any one test run (`Smokey test draft place #{SecureRandom.hex}`), so that each scenario can pick up where the previous scenario left off. This can be seen in the global variable `$smokey_run_id` in the new `Given` block `I have the smokey run identifier`.

I'm aware that [chaining scenarios together can make them difficult to maintain](https://specflow.org/challenges/chain-of-dependent-scenarios), however having each of these steps as a separate scenario feels to me like the most sensible approach, whilst the single global variable could also be of use to future tests which need to satisfy a similar requirement.

This test will not run on production, as it has been given the tag `@notproduction` (the Smokey run for each environment in Concourse excludes tags matching `@not<environment>`).
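As a rough illustration of the shared-state mechanism the commit message describes (the variable name and title format come from the commit message; the surrounding code is a simplified sketch, not the actual Smokey step definitions):

```ruby
require 'securerandom'

# Sketch of the shared run identifier: set once per Smokey run, then reused
# by every scenario so they all operate on the same piece of content.
# Using ||= means parallel runs each get their own unique identifier,
# while scenarios within one run share it.
$smokey_run_id ||= SecureRandom.hex

# Hypothetical helper showing how each scenario could derive the title of
# the content created earlier in the same run.
def smokey_draft_title
  "Smokey test draft place #{$smokey_run_id}"
end
```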
Force-pushed 13259d2 to 99a4808
Originally added in [^1].*

These tests have several major issues and add little value:

- They are significantly more flakey than all other Smokey tests, generating between 20% and 40% of all new failures each day.
- Unlike all other tests, they can only run in Integration, as they make visible changes to GOV.UK. This creates a confusing precedent.
- They were added in response to an incident action [^2][^3], but the action calls for tests in Production, where these cannot run.
- Unlike all other tests, they are order-dependent: the draft must be created before it's published and then unpublished.

We should have end-to-end tests of the publishing side of GOV.UK, but we need those tests to be reliable, which these are not. This is a hard problem to solve and merits more time than we had when we put these in, so that we can solve it well - ideally in Production.

It's unlikely we'll be able to make these tests more reliable in the near future. Leaving them in also poses a risk, as they set a number of precedents that we don't want other tests to copy, such as order dependence and only running in Integration.

On balance, these tests create more issues than the value they add. This commit removes them and their associated steps.

*Note that the card these were added against is very different to the one directly linked to the incident action [^2]. It looks like there was some misunderstanding at the planning stage of this work.

[^1]: #828
[^2]: https://trello.com/c/XbzUB6l9/179-prober-and-alert-for-basic-end-to-end-publishing-journey
[^3]: https://docs.google.com/document/d/12tTb4o1Vn8qF-htppPP2qmei8UHQL9cDZ0DG0YWRZFA/edit#heading=h.p3b9bm9twi19
This PR adds a smoke test for Publisher which creates a new draft `Place` and ensures it is then visible on the draft stack, before then proceeding to delete the draft.

This test will not run on the existing production, and will not run on the new replatformed production when it's ready, as it has been given the tag `@notproduction` (the Smokey run for each environment in Concourse excludes tags matching `@not<environment>`).

Trello: https://trello.com/c/9QipS0X0
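For context, the `@not<environment>` exclusion described above could be wired up along these lines (a sketch only; the `ENVIRONMENT` variable and the exact cucumber invocation are assumptions, not taken from the actual Concourse pipeline):

```shell
#!/bin/sh
# Build the per-environment tag filter described above. ENVIRONMENT is
# assumed to be provided by the CI pipeline (e.g. "production", "staging").
ENVIRONMENT="${ENVIRONMENT:-production}"
TAG_FILTER="not @not${ENVIRONMENT}"

# The actual test run would then be something like:
echo bundle exec cucumber --tags "${TAG_FILTER}"
```

With `ENVIRONMENT=production`, scenarios tagged `@notproduction` are skipped, which is what keeps this test out of prod.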