6 Tool Support for Testing

6.1 Test Tool Considerations

6.1.1 Test Tool Classification

Test tools can have one or more of the following purposes depending on the context:

  • Improve the efficiency of test activities by automating repetitive tasks or tasks that require significant resources when done manually (e.g., test execution, regression testing)
  • Improve the efficiency of test activities by supporting manual test activities throughout the test process (see section 1.4)
  • Improve the quality of test activities by allowing for more consistent testing and a higher level of defect reproducibility
  • Automate activities that cannot be executed manually (e.g., large scale performance testing)
  • Increase reliability of testing (e.g., by automating large data comparisons or simulating behavior)

Some types of test tools can be intrusive, which means that they may affect the actual outcome of the test. For example, the actual response times for an application may be different due to the extra instructions that are executed by a performance testing tool, or the amount of code coverage achieved may be distorted due to the use of a coverage tool. The consequence of using intrusive tools is called the probe effect.
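
As a minimal illustration of the probe effect, the sketch below (plain Python; the work function is a hypothetical stand-in for the code under test) times the same computation with and without a coverage-style line tracer. The traced run is measurably slower because the instrumentation itself consumes time.

```python
import sys
import time

def work():
    # Hypothetical stand-in for the code under test.
    return sum(i * i for i in range(200_000))

def run_and_time(label, fn):
    start = time.perf_counter()
    fn()
    print(f"{label}: {time.perf_counter() - start:.4f}s")

run_and_time("without tracing", work)

# A coverage-style probe: record every executed line, as a coverage tool would.
executed = set()

def tracer(frame, event, arg):
    if event == "line":
        executed.add((frame.f_code.co_filename, frame.f_lineno))
    return tracer

sys.settrace(tracer)
run_and_time("with line tracing", work)  # noticeably slower: the probe effect
sys.settrace(None)
```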

**Tool support for management of testing and testware** Management tools may apply to any test activities over the entire software development lifecycle. Examples:

  • Test management tools and application lifecycle management (ALM) tools
  • Requirements management tools (e.g., traceability to test objects)
  • Defect management tools
  • Configuration management tools
  • Continuous integration tools (D)

**Tool support for static testing**

  • Static analysis tools (D)

**Tool support for test design and implementation** Test design tools aid in the creation of maintainable work products in test design and implementation, including test cases, test procedures and test data. Examples of such tools include:

  • Model-based testing tools
  • Test data preparation tools

In some cases, tools that support test design and implementation may also support test execution and logging, or provide their outputs directly to other tools that support test execution and logging.

**Tool support for test execution and logging**

  • Test execution tools (e.g., to run regression tests)
  • Coverage tools (e.g., requirements coverage, code coverage (D))
  • Test harnesses (D); a minimal harness sketch follows this list
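
To make the test harness entry concrete, here is a minimal sketch using Python's standard unittest and unittest.mock; the checkout function and the payment gateway are hypothetical. The stub stands in for an external component that is unavailable in the test environment, and the test runner acts as the driver that executes and logs the tests.

```python
import unittest
from unittest import mock

def checkout(cart_total, gateway):
    # Hypothetical test object: depends on an external payment gateway.
    if cart_total <= 0:
        raise ValueError("empty cart")
    return gateway.charge(cart_total)

class CheckoutHarness(unittest.TestCase):
    """Driver plus stub: drives checkout() with the gateway stubbed out."""

    def test_successful_charge(self):
        gateway = mock.Mock()
        gateway.charge.return_value = "OK"  # stubbed gateway response
        self.assertEqual(checkout(100, gateway), "OK")
        gateway.charge.assert_called_once_with(100)

    def test_empty_cart_is_rejected(self):
        with self.assertRaises(ValueError):
            checkout(0, mock.Mock())

if __name__ == "__main__":
    unittest.main()  # the driver: executes the tests and logs the results
```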

**Tool support for performance measurement and dynamic analysis** Performance measurement and dynamic analysis tools are essential in supporting performance and load testing activities, as these activities cannot effectively be done manually; a small sketch follows the list below. Examples of these tools include:

  • Performance testing tools
  • Dynamic analysis tools (D)
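
As a rough sketch of what a performance testing tool automates, the fragment below drives many concurrent virtual users and aggregates their response times; request_once is a hypothetical stand-in for a call to the system under test, and a real tool would issue network requests and offer far richer load profiles and reporting.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def request_once():
    # Hypothetical stand-in for one request to the system under test.
    time.sleep(0.01)  # simulated service latency

def run_load_test(n_users=50, requests_per_user=20):
    timings = []

    def user_session():
        for _ in range(requests_per_user):
            start = time.perf_counter()
            request_once()
            timings.append(time.perf_counter() - start)

    # Each virtual user runs in its own thread, all in parallel.
    with ThreadPoolExecutor(max_workers=n_users) as pool:
        for _ in range(n_users):
            pool.submit(user_session)
    return timings

timings = run_load_test()
print(f"{len(timings)} requests, "
      f"median {statistics.median(timings) * 1000:.1f} ms, "
      f"p95 {statistics.quantiles(timings, n=20)[18] * 1000:.1f} ms")
```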

6.1.2 Benefits and Risks of Test Automation

Potential benefits of using tools to support test execution include:

  • Reduction in repetitive manual work (e.g., running regression tests, environment set up/tear down tasks, re-entering the same test data, and checking against coding standards), thus saving time; a short set up/tear down sketch follows this list
  • Greater consistency and repeatability (e.g., test data is created in a coherent manner, tests are executed by a tool in the same order with the same frequency, and tests are consistently derived from requirements)
  • More objective assessment (e.g., static measures, coverage)
  • Easier access to information about testing (e.g., statistics and graphs about test progress, defect rates and performance)
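
As one small illustration of the first two points, the pytest sketch below automates environment set up/tear down with a fixture, so every test starts from an identical state; the in-memory SQLite database and its schema are hypothetical.

```python
import sqlite3
import pytest

@pytest.fixture
def db():
    # Set up: each test gets a fresh in-memory database.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")  # hypothetical schema
    yield conn
    conn.close()  # tear down runs even if the test fails

def test_insert(db):
    db.execute("INSERT INTO users VALUES ('alice')")
    assert db.execute("SELECT COUNT(*) FROM users").fetchone()[0] == 1

def test_starts_empty(db):
    # Repeatability: the fixture guarantees the same starting state every run.
    assert db.execute("SELECT COUNT(*) FROM users").fetchone()[0] == 0
```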

Potential risks of using tools to support testing include:

  • Expectations for the tool may be unrealistic (including functionality and ease of use)
  • The time, cost and effort for the initial introduction of a tool may be under-estimated (including training and external expertise)
  • The time and effort needed to achieve significant and continuing benefits from the tool may be under-estimated (including the need for changes in the test process and continuous improvement in the way the tool is used)
  • The effort required to maintain the test work products generated by the tool may be under-estimated
  • The tool may be relied on too much (seen as a replacement for test design or execution, or the use of automated testing where manual testing would be better)
  • Version control of test work products may be neglected
  • Relationships and interoperability issues between critical tools may be neglected, such as between requirements management tools, configuration management tools, defect management tools, and tools from multiple vendors
  • The tool vendor may go out of business, retire the tool, or sell the tool to a different vendor
  • The vendor may provide a poor response for support, upgrades, and defect fixes
  • An open source project may be suspended
  • A new platform or technology may not be supported by the tool
  • There may be no clear ownership of the tool (e.g., for mentoring, updates, etc.)

6.1.3 Special Considerations for Test Execution and Test Management Tools

Test execution tools
Test execution tools execute test objects using automated test scripts. This type of tool often requires significant effort in order to achieve significant benefits.

  • Capturing test approach: Capturing tests by recording the actions of a manual tester seems attractive, but this approach does not scale to large numbers of test scripts. A captured script is a linear representation with specific data and actions as part of each script. This type of script may be unstable when unexpected events occur, and may require ongoing maintenance as the system’s user interface evolves over time.

  • Data-driven test approach: This test approach separates out the test inputs and expected
    results, usually into a spreadsheet, and uses a more generic test script that can read the input data and execute the same test script with different data.

  • Keyword-driven test approach: In this test approach, a generic script processes keywords describing the actions to be taken (also called action words), which then calls keyword scripts to process the associated test data. Both approaches are sketched after this list.
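
The pytest sketch below illustrates both approaches side by side; the login function, the data table, and the keyword names are all hypothetical stand-ins for a real test object and keyword library.

```python
import pytest

# Data-driven: inputs and expected results live in a table (often a
# spreadsheet or CSV in practice; inlined here), and one generic test
# script runs once per row.
LOGIN_DATA = [
    ("alice", "secret", True),
    ("alice", "wrong", False),
    ("", "secret", False),
]

def login(user, password):
    # Hypothetical stand-in for the test object.
    return user == "alice" and password == "secret"

@pytest.mark.parametrize("user,password,expected", LOGIN_DATA)
def test_login_data_driven(user, password, expected):
    assert login(user, password) is expected

# Keyword-driven: a generic script interprets action words; each keyword
# maps to a small keyword script that also consumes the associated data.
def kw_open_session(ctx):
    ctx["session"] = []

def kw_enter_user(ctx, user):
    ctx["session"].append(user)

def kw_check_last_user(ctx, expected):
    assert ctx["session"][-1] == expected

KEYWORDS = {
    "open session": kw_open_session,
    "enter user": kw_enter_user,
    "check last user": kw_check_last_user,
}

def run_keyword_test(steps):
    # The generic script: process (keyword, *data) tuples in order.
    ctx = {}
    for keyword, *data in steps:
        KEYWORDS[keyword](ctx, *data)

def test_login_keyword_driven():
    run_keyword_test([
        ("open session",),
        ("enter user", "alice"),
        ("check last user", "alice"),
    ])
```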

Model-based testing (MBT) tools enable a functional specification to be captured in the form of a model, such as an activity diagram. This task is generally performed by a system designer. The MBT tool interprets the model in order to create test case specifications which can then be saved in a test management tool and/or executed by a test execution tool.
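
A minimal sketch of the underlying idea (not of any particular MBT tool): the model below is a small state-transition table for a hypothetical login feature, and the generator derives one test case specification per transition.

```python
# Hypothetical login model: states mapped to {action: next_state}.
MODEL = {
    "logged_out": {"enter_valid_credentials": "logged_in",
                   "enter_invalid_credentials": "logged_out"},
    "logged_in": {"log_out": "logged_out",
                  "view_profile": "logged_in"},
}

def generate_tests(model):
    """Derive one test case specification per transition (transition
    coverage); a real MBT tool supports richer models and criteria."""
    tests = []
    for state, transitions in model.items():
        for action, target in transitions.items():
            tests.append({
                "precondition": f"system is in state '{state}'",
                "action": action,
                "expected_state": target,
            })
    return tests

for case in generate_tests(MODEL):
    print(case)
```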

Test management tools
Test management tools often need to interface with other tools or spreadsheets for various reasons, including:

  • To produce useful information in a format that fits the needs of the organization
  • To maintain consistent traceability to requirements in a requirements management tool
  • To link with test object version information in the configuration management tool

This is particularly important to consider when using an integrated tool (e.g., Application Lifecycle Management), which includes a test management module, as well as other modules (e.g., project schedule and budget information) that are used by different groups within an organization.
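
As a simplified sketch of such an interface, the fragment below shows the kind of test-to-requirement linkage a test management tool might exchange with a requirements management tool; the requirement IDs, the covers decorator, and the report format are all hypothetical.

```python
# Hypothetical requirements export from a requirements management tool.
REQUIREMENTS = {"REQ-101": "User can log in", "REQ-102": "User can log out"}

def covers(*req_ids):
    """Decorator recording which requirement(s) a test covers."""
    def mark(test_func):
        test_func.requirements = req_ids
        return test_func
    return mark

@covers("REQ-101")
def test_login():
    assert True  # placeholder test body

def traceability_report(tests):
    covered = {r for t in tests for r in getattr(t, "requirements", ())}
    for req_id, text in REQUIREMENTS.items():
        status = "covered" if req_id in covered else "NOT COVERED"
        print(f"{req_id} ({text}): {status}")

traceability_report([test_login])  # REQ-102 is reported as NOT COVERED
```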

6.2 Effective Use of Tools

6.2.1 Main Principles for Tool Selection

The main considerations in selecting a tool for an organization include:

  • Assessment of the maturity of the organization, its strengths and weaknesses
  • Identification of opportunities for an improved test process supported by tools
  • Understanding of the technologies used by the test object(s), in order to select a tool that is compatible with that technology
  • Understanding the build and continuous integration tools already in use within the organization, in order to ensure tool compatibility and integration
  • Evaluation of the tool against clear requirements and objective criteria
  • Consideration of whether or not the tool is available for a free trial period (and for how long)
  • Evaluation of the vendor (including training, support and commercial aspects) or support for non-commercial (e.g., open source) tools
  • Identification of internal requirements for coaching and mentoring in the use of the tool
  • Evaluation of training needs, considering the testing (and test automation) skills of those who will be working directly with the tool(s)
  • Consideration of pros and cons of various licensing models (e.g., commercial or open source)
  • Estimation of a cost-benefit ratio based on a concrete business case (if required)

As a final step, a proof-of-concept evaluation should be done to establish whether the tool performs effectively with the software under test and within the current infrastructure or, if necessary, to identify changes needed to that infrastructure to use the tool effectively.

6.2.2 Pilot Projects for Introducing a Tool into an Organization

After completing the tool selection and a successful proof-of-concept, introducing the selected tool into an organization generally starts with a pilot project, which has the following objectives:

  • Gaining in-depth knowledge about the tool, understanding both its strengths and weaknesses
  • Evaluating how the tool fits with existing processes and practices, and determining what would need to change
  • Deciding on standard ways of using, managing, storing, and maintaining the tool and the test work products (e.g., deciding on naming conventions for files and tests, selecting coding standards, creating libraries and defining the modularity of test suites)
  • Assessing whether the benefits will be achieved at reasonable cost
  • Understanding the metrics that you wish the tool to collect and report, and configuring the tool to ensure these metrics can be captured and reported

6.2.3 Success Factors for Tools

Success factors for evaluation, implementation, deployment, and ongoing support of tools within an organization include:

  • Rolling out the tool to the rest of the organization incrementally
  • Adapting and improving processes to fit with the use of the tool
  • Providing training, coaching, and mentoring for tool users
  • Defining guidelines for the use of the tool (e.g., internal standards for automation)
  • Implementing a way to gather usage information from the actual use of the tool
  • Monitoring tool use and benefits
  • Providing support to the users of a given tool
  • Gathering lessons learned from all users