
Exploratory tester #29

Open
dialex opened this issue Jan 11, 2018 · 5 comments

Comments

dialex commented Jan 11, 2018

Personalities

@dialex dialex added this to the v0.4-Roles milestone Jan 11, 2018
@dialex dialex added this to Untriaged in Writing via automation Jan 11, 2018

dialex commented Jan 11, 2018

[image attachment: img_20170322_131043053]


dialex commented Mar 11, 2018

A list of requirements is never really complete; there will always be requirements that are not stated, that are assumed, or that are omitted. Regardless of how comprehensive your requirements are, they will never be an exhaustive list. You won’t know everything the software will do up front. That’s where exploratory testing comes in.

Exploratory testing is defined as simultaneous learning, test design and execution [2]. The tester explores the application, discovering new information, learning, and finding new things to test as they go. They could do this alone, or pair with another tester or perhaps a developer.

Software testing shouldn’t be perceived only as a task where the tester works through a list of pre-prepared tests or test cases, giving a firm pass or fail result. If you have a user story, or a set of requirements, it is of course important to make sure what you are testing adheres to them; however, it can be helpful to reframe acceptance criteria as ‘rejection criteria’. When the acceptance criteria are not met, the product is not acceptable, but if they are met, that doesn’t mean the product has no issues.

Checking and verifying should be combined with exploration and investigation, asking questions of the product like ‘What happens if…’ that you may not know the answers to before you start, and that test cases written in advance may not cover.

https://dojo.ministryoftesting.com/dojo/lessons/so-what-is-software-testing

A test script will check whether what was expected and known to be true still is.
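The contrast can be sketched with a minimal example (the function and values are hypothetical, purely for illustration): a scripted check asserts a known, documented expectation, while an exploratory session probes "what happens if…?" inputs that no pre-prepared test case covers.

```python
def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test."""
    return round(price * (1 - percent / 100), 2)

def test_known_expectation():
    # Scripted check: a firm pass/fail against a documented requirement.
    assert apply_discount(100.0, 10) == 90.0

test_known_expectation()

# Exploratory probing: inputs the requirement never mentioned.
# A >100% discount? A negative price? These surface questions
# ("should this raise an error?") rather than pass/fail answers.
for price, percent in [(100.0, 150), (-5.0, 10), (0.0, 0)]:
    print(price, percent, "->", apply_discount(price, percent))
```

The scripted assertion confirms a known truth still holds; the loop at the end is where new information (and new tests) comes from.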


dialex commented Aug 17, 2019

One of the things I’ve noticed over the years is that anyone is capable of doing Exploratory Testing. Anyone at all. It just happens that some do it better than others. Some want to do it. Some don’t want to do it. Some don’t realise they are doing it. Some don’t know what to label it.

Have you ever opened a new electronic device and explored around what it can do? Or tried to get something working without reading the instructions?

In our testing world, though, I’ve observed a great many people attaching a stigma to Exploratory Testing; it’s often deemed inferior, something to do rarely, or less important than specification-based scripted testing. I think much of this stigma, or resistance to exploration, comes from many testers feeling (or believing) that Exploratory Testing (ET) is unstructured or random.

From the whole section on "Experienced Exploratory Testers":

I believe that the more advanced a practitioner becomes in Exploratory Testing the more they are able to structure that exploration, but more importantly to me, the more they are able to explain to themselves and others what they plan to do, are doing and have done.
(...)
It’s this notetaking (or other capture mechanism) that not only allows them to do good exploratory testing but also to do good explanations of that testing to others.

Good exploratory testing is searchable, auditable, insightful and can adhere to many compliance regulations. Good exploratory testing should be trusted.

Being able to do good exploratory testing is one thing; being able to explain this testing (and the insights it brings) to the people that matter requires a different set of skills. I believe many testers are ignoring and neglecting the communication side of their skills, and that’s a shame because it may be directly affecting the opportunities they have to do exploratory testing in the first place.

http://thesocialtester.co.uk/explaining-exploratory-testing-relies-on-good-notes/


dialex commented Jan 27, 2020


dialex commented Apr 24, 2020

At the end, all the acceptance tests (and unit tests) are passing. There is no hand-off to Testers to make sure the system does what it is supposed to. The acceptance tests already prove that the system is working (according to spec).

This does not mean that Testers do not put their hands on the keyboard and their eyes on the screen. They do! (...) They perform exploratory testing. They get creative. They do what Testers are really good at—they find new and interesting ways to break the system. They uncover under-specified areas of the system.

So, in short, the business specifies the system with automated acceptance tests. Programmers run those tests to see what unit tests need to be written. The unit tests force them to write production code that passes both tests. In the end, all the tests pass. In the middle of the iteration, QA changes from writing automated tests to exploratory testing.

-- https://sites.google.com/site/unclebobconsultingllc/tdd-with-acceptance-tests-and-unit-tests
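The flow described above can be sketched in miniature (all names and numbers here are hypothetical, not from the quoted article): a business-facing acceptance test specifies behaviour, a programmer-facing unit test pins down one function, and the production code is written so both pass.

```python
def net_total(prices, tax_rate):
    """Production code, written to make the tests below pass."""
    subtotal = sum(prices)
    return round(subtotal * (1 + tax_rate), 2)

def test_unit_net_total():
    # Programmer-facing unit test for a single function.
    assert net_total([10.0], 0.0) == 10.0

def test_acceptance_checkout():
    # Business-facing acceptance test: "a basket of 10 and 20
    # with 10% tax totals 33.00".
    assert net_total([10.0, 20.0], 0.10) == 33.0

test_unit_net_total()
test_acceptance_checkout()
# Once these pass, testers explore the under-specified areas:
# empty baskets, negative prices, absurd tax rates.
```

With both suites green there is nothing to "hand off"; the testers' time goes into the creative breaking the article describes.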
