Software Testing Glossary

This glossary aims to be the most comprehensive compilation of software testing terms. Recognizing the ever-evolving nature of our industry, this document is not static. I invite and encourage readers to provide feedback on each term, so we may continually refine and enrich this resource.

Raise a PR to contribute changes. Modifications will be submitted back upstream to https://ray.run/glossary.

A/B testing involves creating one or more variants of a webpage to compare against the current version. The goal is to determine which version performs best based on specific metrics, such as revenue per visitor or conversion rate.

Acceptance Test Driven Development (ATDD) is a development approach in which acceptance tests are defined collaboratively, from the user's perspective, before implementation begins. Integrating testing as a core component of the development process reduces defects and helps ensure the application meets quality standards.

Acceptance testing is conducted by potential end-users or customers to determine if the software meets the required specifications and is suitable for its intended use.

Accessibility testing ensures mobile and web applications are usable by everyone, including individuals with disabilities such as visual or hearing impairments, and other physical or cognitive challenges.

The actual result is the outcome obtained after a test is conducted. During the testing phase, the actual result is documented alongside the test case. After all tests, it's compared with the expected outcome, noting any discrepancies.

Ad hoc testing is an informal, spontaneous approach to software testing. Its main objective is to identify vulnerabilities or issues as quickly as possible. This method is unstructured, conducted without detailed planning or documentation.

Agile software development is an iterative method where requirements and solutions are collaboratively developed by cross-functional teams. It emphasizes adaptability and responsiveness over rigid planning.

Agile testing aligns with the principles of Agile software development. Unlike traditional approaches, testing starts at the project's outset with development and testing occurring simultaneously. This close collaboration ensures tasks are accomplished efficiently.

Alpha testing aims to identify bugs before the product reaches the end-users. Conducted late in the development process but before beta testing, it helps ensure that the product is free from major issues.

Analytical test strategies involve analyzing the test basis before executing the test. This strategy helps pinpoint potential problems early on, ensuring a more effective testing process.

An Application Programming Interface (API) is a set of rules allowing two applications to communicate. The term "Application" in this context denotes any software with a specific function. The API defines how these applications send and receive requests and responses.

API testing involves verifying and validating an API's performance, functionality, reliability, and security. The process includes sending requests to the API and analyzing its responses to ensure they meet expected outcomes. This testing can be done manually or using automated tools, helping identify issues like invalid inputs, poor error handling, and unauthorized access.
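
As a minimal sketch of the idea, assuming Node 18+ (for the built-in fetch) and a hypothetical https://api.example.com/users/1 endpoint, an API test sends a request and asserts on the response:

```typescript
// Minimal API test sketch; the endpoint is a hypothetical placeholder.
import assert from "node:assert";

async function testGetUser(): Promise<void> {
  const response = await fetch("https://api.example.com/users/1");

  // Verify the response status and content type.
  assert.strictEqual(response.status, 200);
  assert.match(response.headers.get("content-type") ?? "", /application\/json/);

  // Verify the response body matches the expected outcome.
  const body = await response.json();
  assert.strictEqual(body.id, 1);
}

testGetUser()
  .then(() => console.log("PASS"))
  .catch((err) => {
    console.error("FAIL", err);
    process.exitCode = 1;
  });
```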

The American Software Testing Qualifications Board (ASTQB) is the U.S. national board for the International Software Testing Qualifications Board (ISTQB). It's responsible for the "ISTQB Certified Tester" program in the U.S. The ASTQB provides and manages certifications, accredits trainers, and approves training materials for software testing professionals. Earning an ISTQB certification through ASTQB ensures that an individual meets internationally recognized standards in software testing. Additionally, ASTQB promotes the value and importance of software testing as a profession within the U.S. software development and IT industries.

Automated testing uses scripts to perform repetitive testing tasks, increasing testing efficiency. It enhances test coverage and execution speed, making the software testing process more effective.

Availability Testing, in the context of software testing, refers to evaluating a system's uptime, ensuring that the application or system remains accessible and operational to users as intended. The primary goal of this testing is to guarantee that the software meets its defined availability criteria and provides a reliable service without prolonged interruptions. This kind of testing often considers scenarios like system failures, maintenance, peak user loads, and network outages, and aims to determine the system's overall reliability and readiness for production deployment. Availability Testing is crucial for applications where continuous accessibility is paramount, such as e-commerce platforms, banking systems, and critical infrastructure services.

Back-to-back testing compares the results of two or more similar-functioning components to check for differences in their outputs.

Backward Compatibility, in the context of software testing, refers to the ability of a software application or system to effectively function with earlier versions of itself or interface correctly with older input data formats, configurations, or hardware. In essence, when a software product is backward compatible, it ensures that users employing older versions won't encounter unexpected issues or malfunctions when interfacing with the newer iteration. Testing for backward compatibility is crucial during software upgrades or releases to make certain that changes introduced do not negatively impact existing users or break established functionalities. This practice prioritizes the user experience, ensuring seamless transitions and interactions between software generations.

Baseline Testing is a type of non-functional testing where the performance or characteristics of a system or application are measured under specific conditions. This initial measurement serves as a "baseline" or benchmark against which future performance levels can be compared. The primary goal of baseline testing is to understand the current behavior of the system and set a standard for subsequent testing phases. Any deviations in future tests from this baseline can indicate performance issues, regressions, or other anomalies that might need addressing.

BDD (Behavior-Driven Development) is an agile software development approach that emphasizes collaboration between developers, testers, and domain experts. It focuses on understanding and defining the desired behavior of a system from the user's perspective. BDD encourages the use of simple, plain-language descriptions of software behavior, often structured as "Given-When-Then" scenarios. These descriptions serve as both requirements documentation and a basis for automated tests, ensuring that software development is aligned with user needs and expectations.

Beta testing is the final testing phase before product release, where a near-complete version is provided to a select group of end-users. It aims to gather feedback on various aspects of the software, ensuring it meets user expectations.

Big Bang Testing is an integration approach where all system units are combined at once and tested as a whole, rather than incrementally. This method can make error isolation challenging, since a failure cannot easily be traced to the interface of any individual unit.

Black box testing assesses software without considering its internal workings. Typically focused on functional or acceptance testing, it can be done by anyone, regardless of their familiarity with the codebase.

Bottom-up integration testing starts by testing lower-level modules first, then integrates and tests them with higher-level ones. During this process, "Drivers" may be used to assist in testing.

Boundary testing (boundary value analysis) evaluates software by focusing on the boundary or edge values of the input domain.
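
A sketch of the technique, assuming a hypothetical isValidQuantity() that accepts integers from 1 to 100: test values sit on and just beyond each boundary of the input domain.

```typescript
// Hypothetical validator under test: accepts quantities from 1 to 100.
function isValidQuantity(qty: number): boolean {
  return Number.isInteger(qty) && qty >= 1 && qty <= 100;
}

// Boundary values: on, just below, and just above each edge.
const cases: Array<[number, boolean]> = [
  [0, false],   // just below the lower boundary
  [1, true],    // lower boundary
  [2, true],    // just above the lower boundary
  [99, true],   // just below the upper boundary
  [100, true],  // upper boundary
  [101, false], // just above the upper boundary
];

for (const [input, expected] of cases) {
  console.assert(isValidQuantity(input) === expected, `failed for ${input}`);
}
```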

BrowserStack is a cloud-based web and mobile testing platform that allows developers and testers to view and interact with their websites and applications across multiple browsers, operating systems, and real mobile devices without the need for an internal lab of virtual machines or devices. It provides instant access to a wide range of browser and OS combinations, ensuring that developers can test their products in real-world conditions. This helps in identifying and resolving compatibility issues that might not be evident on a single platform or browser. BrowserStack is particularly beneficial for ensuring cross-browser and cross-platform compatibility, and it integrates with many popular continuous integration tools to streamline the testing process.

BS 7925-2 is a standard for Software Component Testing. It outlines a process for component testing using specific test designs and measurement techniques, aiming to enhance the quality of both testing and software products.

A bug is an error or fault in a program that leads to incorrect outcomes or crashes. It arises from flawed or incomplete logic and can cause the software to deviate from its expected performance.

Build Verification Testing (BVT) is a set of preliminary tests performed on a newly built software product to ensure its basic functionality before it undergoes more in-depth testing. The primary goal of BVT is to quickly identify any major issues or showstoppers that might render the software unusable. If the build fails this testing phase, it's considered unstable, and detailed testing is typically postponed until the severe defects are addressed. BVT acts as a quality gate, ensuring that only builds meeting a certain quality threshold move forward in the testing lifecycle, thus saving time and resources on later-stage testing of flawed builds.

Canary Testing is a technique used to detect issues by gradually releasing changes or updates to a subset of users. Often paired with A/B testing, it enables developers to evaluate and refine features based on feedback before a full release.

Chai.js, often simply referred to as Chai, is a BDD/TDD (Behavior-Driven Development/Test-Driven Development) assertion library for Node.js and browsers. It pairs seamlessly with popular JavaScript testing frameworks, such as Mocha and Jasmine. Chai provides developers with the capability to express assertions in a readable language, mimicking natural language constructions.
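
A few representative assertions in Chai's BDD "expect" style; the add() function is a hypothetical subject under test, and such assertions would typically run under Mocha or Jasmine:

```typescript
import { expect } from "chai";

// Hypothetical function under test.
function add(a: number, b: number): number {
  return a + b;
}

expect(add(2, 2)).to.equal(4);                      // strict equality
expect([1, 2, 3]).to.have.lengthOf(3);              // collection size
expect({ name: "test" }).to.have.property("name");  // object shape
expect(() => JSON.parse("{")).to.throw();           // error expectation
```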

Change Control, in the context of software testing, refers to a formal process used to ensure that modifications or updates to a software product or system are introduced in a controlled and coordinated manner. It involves documenting, evaluating, approving, and overseeing any changes made to the software, its environment, or associated documents during and after the development process.

Change requests originate from stakeholders wishing to alter a product or its development method. They can range from defect reports to requests for new features or enhancements.

Chaos engineering tests a software's resilience by introducing random faults and disruptions. This method challenges applications in unpredictable ways, aiming to uncover unanticipated flaws and weaknesses.

A clean slate refers to the practice of resetting a system, application, or environment to its original or default state before conducting a test or evaluation. In the context of software testing, a clean slate ensures that tests are performed under consistent and repeatable conditions, devoid of any prior residues or configurations that might influence the outcome. For instance, when testing a web application, using a fresh and cache-cleared web browser ensures that no previously stored data or settings interfere with the current test session. This approach minimizes variables and helps in achieving accurate and reliable test results.

The Capability Maturity Model Integration (CMMI) is a collection of best practices in engineering, service delivery, and management. It aids organizations in enhancing their delivery capabilities, ensuring customer satisfaction through continuous improvement.

Code coverage measures the extent of code that has been tested, assisting in evaluating the test suite's quality. It identifies areas not executed during testing and is a form of white box testing.

Compatibility testing assesses software performance in specific hardware, software, OS, or network conditions.

Concurrency testing measures the system's performance under simultaneous or multi-user loads.

Control flow testing examines the paths that a program takes during its execution flow.

Cross-browser testing ensures web applications function correctly across various web browsers.

Cypress is an end-to-end testing framework designed for modern web applications. Unlike many other testing solutions, Cypress operates directly within the web browser, ensuring more consistent and accurate real-world testing scenarios. It provides a rich set of features and tools for writing tests, debugging in real time, and capturing screenshots or video recordings of test runs. Cypress supports both unit testing and full end-to-end testing, making it a versatile choice for developers and QA professionals. One of its notable features is its interactive test runner that allows developers to see commands as they execute while also viewing the application under test. Built on top of technologies like Mocha, Chai, and Sinon, Cypress offers a comprehensive and user-friendly environment for web application testing.
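
A minimal Cypress test sketch; the URL and selectors are hypothetical placeholders for an application under test, and the cy, describe, and it globals are provided by Cypress:

```typescript
describe("login page", () => {
  it("shows a welcome message after signing in", () => {
    cy.visit("https://app.example.com/login");
    cy.get("input[name=email]").type("user@example.com");
    cy.get("input[name=password]").type("s3cret");
    cy.get("button[type=submit]").click();

    // Cypress retries assertions until they pass or time out.
    cy.contains("Welcome back").should("be.visible");
  });
});
```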

A database is an organized collection of structured information or data, typically stored electronically in a computer system. It is designed to store, retrieve, and manage data efficiently and securely. Databases allow users to access data in various ways, from simple queries to complex transactions. They can be classified based on their data model, such as relational, document-based, key-value, and graph databases. A relational database, one of the most common types, organizes data into tables with rows and columns. Databases are integral to numerous applications and systems, from websites to banking software, ensuring data integrity, availability, and consistency. They are managed using database management systems (DBMS), which provide tools and interfaces for interacting with the stored data.

Data flow testing centers on the variables and their values during computations and storage.

Decision Table Testing is a black-box software testing technique used to determine the test scenarios for complex business logic. It involves representing conditions and their respective outcomes in a tabular form, simplifying the logic by highlighting every possible combination. Each row in the decision table represents a unique combination of conditions, leading to specific actions or outcomes, ensuring that all possible scenarios are considered. This method is especially useful when dealing with systems that have various input combinations and corresponding outputs, as it helps in systematically identifying and covering all possible test cases, reducing the risk of missed scenarios.
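
A sketch of how a decision table translates into tests, using a hypothetical discount rule with two conditions; each row of the table is one condition combination with its expected outcome:

```typescript
// Hypothetical business rule: member? + coupon? → discount percentage.
function discount(isMember: boolean, hasCoupon: boolean): number {
  if (isMember && hasCoupon) return 20;
  if (isMember || hasCoupon) return 10;
  return 0;
}

// The decision table: every combination of conditions is covered.
const decisionTable: Array<[boolean, boolean, number]> = [
  // isMember  hasCoupon  expected %
  [true,  true,  20],
  [true,  false, 10],
  [false, true,  10],
  [false, false, 0],
];

for (const [isMember, hasCoupon, expected] of decisionTable) {
  console.assert(
    discount(isMember, hasCoupon) === expected,
    `failed for member=${isMember}, coupon=${hasCoupon}`,
  );
}
```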

Defect Management, in software testing, refers to the systematic process of identifying, recording, tracking, and resolving defects or bugs detected in a software application. It encompasses the entire lifecycle of a defect, from its discovery to closure, ensuring that issues are appropriately addressed and resolved before the software's release.

Dependency Testing in software testing refers to the process of examining the interactions and dependencies between different software modules or components to ensure they interact correctly. This type of testing focuses on identifying issues that might arise when one component relies on another to function properly.

Documentation Testing in software testing refers to the process of evaluating and verifying the quality, completeness, and accuracy of documentation associated with software products. This can include user manuals, help guides, installation instructions, API documentation, and more. The primary goal is to ensure that the documentation provides clear, consistent, and correct information to its intended audience, be it end-users, administrators, or developers. Inaccuracies or ambiguities in documentation can lead to user frustration, incorrect usage of the software, or even system failures. By conducting documentation testing, organizations aim to provide a seamless user experience, reduce support costs, and enhance the overall usability and understanding of the software product.

Dynamic Testing, in the context of software testing, refers to the process of evaluating a software application or system through its execution. Unlike static testing, where code is analyzed without being executed, dynamic testing involves running the software to observe its behavior and identify potential defects. This form of testing checks the software's actual functionality and performance under various conditions. Common types of dynamic testing include unit testing, integration testing, system testing, and acceptance testing. The primary objective is to ensure that the software behaves as expected and meets its requirements when it is in operation.

Edge Testing, often confused with "boundary testing," is a testing technique used to identify problems that might occur at the extreme operating parameters, often referred to as the "edges" of the software's capability or limits. It focuses on testing the system's performance or behavior at or near its capacity limits or operational extremes. For instance, if a software claims to support up to 1,000 concurrent users, edge testing would involve testing the system with close to, if not exactly, 1,000 users to observe its behavior. The goal is to ensure the system operates reliably at its boundaries and to uncover potential issues that arise only under extreme conditions.

End-to-end testing exercises the complete functionality of an application process, from start to finish.

Endurance Testing, in the context of software, is a type of performance testing where the system is subjected to a consistent workload or stress for an extended period. The primary goal of endurance testing is to identify how the system behaves under sustained use and to uncover potential issues like memory leaks, resource depletion, or performance degradation that might manifest only after prolonged operation. By simulating a real-world long-running environment, endurance testing helps ensure that the software remains stable, reliable, and efficient over time, free from slowdowns or crashes that could result from extended usage.

In software testing, Entry Criteria refer to the set of predefined conditions or requirements that must be met before a particular test phase can begin. These conditions ensure that testing is conducted in a structured manner and that the process is initiated only when the prerequisites are in place. Entry Criteria can encompass various aspects, such as the availability of the test environment, the readiness of test tools and test data, the completion of previous phases, or the sign-off of certain documents. Establishing clear Entry Criteria helps in avoiding premature testing, ensuring that resources are utilized efficiently, and maintaining the quality and effectiveness of the testing process.

Equivalence Partitioning is a software testing technique used to reduce the number of test cases by dividing the input data of a software unit into partitions of equivalent data. Instead of testing every possible input, equivalence partitioning proposes that test cases can be designed for representative values from each partition. The underlying principle is that if the software behaves correctly for one value in a partition, it will behave correctly for all other values in the same partition, and vice versa.
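
A sketch with a hypothetical ticketPrice() function whose input domain splits into three partitions (child, adult, senior); one representative value is tested per partition:

```typescript
// Hypothetical function under test: 0-12 child, 13-64 adult, 65+ senior.
function ticketPrice(age: number): number {
  if (age <= 12) return 5;
  if (age <= 64) return 10;
  return 7;
}

// If the function is correct for one value in a partition, it is assumed
// correct for all other values in that partition.
console.assert(ticketPrice(8) === 5, "child partition");
console.assert(ticketPrice(30) === 10, "adult partition");
console.assert(ticketPrice(70) === 7, "senior partition");
```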

Error Guessing is a software testing technique where the tester, relying on their experience, intuition, and knowledge of the system, tries to predict where defects might occur. Instead of following a systematic testing approach or predefined test cases, testers make educated guesses to identify potential problem areas or scenarios where the software might fail. The technique is based on the tester's familiarity with common errors, past defects, or specific system behavior. Error guessing is often used as a supplementary testing method, complementing more structured techniques, and is particularly effective in identifying unique or unanticipated issues.

The expected result is the anticipated outcome when a specific test case runs.

Exploratory testing is a dynamic process where test design and test execution happen simultaneously. It leverages the tester's experience and is especially useful under tight time constraints.

Extreme Programming (XP) is an agile software development method. Unlike Scrum, which targets project management, XP emphasizes software development best practices.

Failover Testing is a specific type of testing that evaluates a system's ability to automatically transfer control to a backup system or component when a failure occurs. The primary objective of failover testing is to ensure that, in the event of system or component malfunction, the failover process happens seamlessly without data loss or significant downtime. This test helps in validating the system's high availability and fault tolerance capabilities, ensuring that mission-critical applications remain operational even under unplanned adverse conditions. Failover testing is crucial for systems that require high availability, such as financial transaction systems, healthcare applications, and data centers.

In software testing, a False Negative refers to a situation where a test fails to identify a defect or issue that is actually present in the system. In other words, the test incorrectly indicates that the software is functioning correctly when, in reality, there's a fault or bug. False negatives can give a false sense of security, leading teams to believe the software is of higher quality than it actually is. This type of error is particularly concerning because it might allow critical defects to go unnoticed and reach the production environment, potentially resulting in undesired consequences for users or businesses.

In software testing, a False Positive refers to a situation where a test incorrectly identifies a defect or issue in the software when, in reality, there isn't one. Essentially, it's a test indicating a problem where none exists. False positives can arise due to various reasons, such as incorrect test data, flawed test conditions, or misconfigurations in the testing environment. While they might seem harmless, false positives can be detrimental as they can lead to wasted effort, resources, and time for development teams, potentially diverting attention away from genuine issues. Thus, it's essential to validate and rectify such occurrences to maintain the efficiency and accuracy of the testing process.

Factory Acceptance Testing (FAT) confirms that newly manufactured equipment operates as intended and fulfills the customer's requirements.

Fault injection involves deliberately introducing faults to test the system's robustness.

A flaky test in software testing refers to a test that produces inconsistent results: it might pass on one run and fail on another without any changes to the code, configuration, or environment. The unpredictability of flaky tests can undermine the reliability of a testing suite, making it challenging to determine whether a failure is due to a genuine issue in the software or merely the test's inconsistency. Flaky tests can arise from a range of factors, including timing issues, external dependencies, and non-deterministic factors. Addressing and eliminating flakiness is crucial to maintain trust in a testing process and to ensure that genuine defects are promptly identified and addressed.

Front-end testing focuses on the user interface (UI) and its interactions within an application.

Functional Integration relates products and services to an ecosystem to attract and retain customers.

Functional Requirements define the expected behavior of a software system or application, specifying what the system should do in terms of processes, functionalities, and features. These requirements outline the interactions between the system and its users, as well as any other external systems or interfaces. They serve as a basis for the design, development, and testing phases of the software lifecycle.

Functional testing checks if a software application's functions align with its requirements. It's a type of black box testing, meaning it doesn't involve the application's source code.

Future-proof testing ensures a software application can adapt to future technological changes without extensive modification.

Fuzz Testing is a dynamic software testing technique that involves providing a system with random, malformed, or unexpected input data to identify vulnerabilities and weaknesses. The goal of fuzz testing is to trigger errors, crashes, memory leaks, or other unforeseen behaviors in the software, which can then be analyzed to find potential security threats or software defects. It's especially effective for uncovering issues that might be exploited by malicious attacks, such as buffer overflows or data injection vulnerabilities. Fuzzing is commonly used in security testing and is considered a proactive measure to enhance software robustness and reliability.
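
A toy fuzzer sketch: it feeds random byte strings to a parser (here JSON.parse stands in for a hypothetical parseConfig under test) and flags any failure that is not a well-typed parse error:

```typescript
// Stand-in for the real parser under test.
function parseConfig(input: string): unknown {
  return JSON.parse(input);
}

// Generate a random string of up to maxLen arbitrary byte values.
function randomString(maxLen: number): string {
  const len = Math.floor(Math.random() * maxLen);
  let s = "";
  for (let i = 0; i < len; i++) {
    s += String.fromCharCode(Math.floor(Math.random() * 256));
  }
  return s;
}

for (let i = 0; i < 10_000; i++) {
  const input = randomString(64);
  try {
    parseConfig(input);
  } catch (err) {
    // Expected, well-typed rejections are fine; anything else is a finding.
    if (!(err instanceof SyntaxError)) {
      console.error("Unexpected crash for input:", JSON.stringify(input), err);
    }
  }
}
```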

Gherkin is a domain-specific language used primarily for behavior-driven development (BDD). It provides a structured and human-readable format to describe and document the desired behavior of software features. Gherkin's syntax uses plain language combined with specific keywords—such as "Given," "When," "Then," "And," and "But"—to define preconditions, actions, and expected outcomes. These Gherkin scenarios can then be utilized as both specifications for the system's behavior and the foundation for automated tests, making it a bridge between non-technical stakeholders and the technical team.
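
A sketch pairing a Gherkin scenario (shown in comments) with matching step definitions, assuming the @cucumber/cucumber package; the shopping-cart logic is hypothetical:

```typescript
// Feature file (Gherkin):
//   Scenario: Adding an item to an empty cart
//     Given an empty shopping cart
//     When the user adds 2 apples
//     Then the cart contains 2 items
import { Given, When, Then } from "@cucumber/cucumber";
import assert from "node:assert";

let cart: string[] = [];

Given("an empty shopping cart", () => {
  cart = [];
});

When("the user adds {int} apples", (count: number) => {
  for (let i = 0; i < count; i++) cart.push("apple");
});

Then("the cart contains {int} items", (count: number) => {
  assert.strictEqual(cart.length, count);
});
```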

Glass box testing inspects a program's structure and formulates test data based on its logic.

Gorilla testing is intense testing of a specific module or feature, often by a tester or developer.

Grey box testing involves testing an application with partial knowledge of its internal workings. It aims to identify issues stemming from the code structure or its application.

The "happy path" refers to the default scenario in which a system or application operates without any errors, exceptions, or unexpected user behavior. It represents the most straightforward and trouble-free journey through a given system or process, resulting in a successful outcome. When testing software, the happy path ensures that the core functionalities work as expected under optimal conditions. However, while it's essential to verify that the happy path operates correctly, comprehensive testing also requires examining edge cases, exceptions, and potential error scenarios to ensure robustness and reliability.

Headless Testing refers to the practice of running browser automation tests without the graphical user interface (GUI) being visible or rendered. In this approach, tests are conducted using a "headless" browser—a browser without a user interface. Since these tests do not require the browser's GUI elements to load visually, they tend to run faster and are particularly useful in environments where display devices, windows, or browsers are unnecessary or unavailable, such as continuous integration pipelines or server environments. Common tools for headless testing include Chrome's headless mode, PhantomJS, and Puppeteer. The primary advantage of headless testing is its efficiency, enabling faster feedback and more frequent test runs.
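
A headless run sketched with Puppeteer; the URL is a placeholder, and no browser window is rendered while the script drives the page:

```typescript
import puppeteer from "puppeteer";

(async () => {
  // Launch the browser without a visible GUI.
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  await page.goto("https://example.com");

  // Capture evidence of the run even though nothing is displayed.
  const title = await page.title();
  console.log("Page title:", title);
  await page.screenshot({ path: "homepage.png" });

  await browser.close();
})();
```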

Heuristic testing relies on experience-based techniques to identify defects.

IEEE 829 is a standard for Software Test Documentation, dictating the structure for documents throughout the testing life cycle.

Impact Analysis, in the context of software testing, refers to the process of identifying and assessing the potential effects of a change in the software. When a code change or a new feature is introduced, it's crucial to understand how this alteration might influence existing functionalities or components. By conducting impact analysis, teams can ensure that modifications don't introduce new defects, make efficient use of testing resources by targeting the affected areas, and reduce the risk of unforeseen issues in the production environment.

Incident Management, in the context of Quality Assurance (QA), refers to the systematic process of identifying, recording, analyzing, tracking, and resolving incidents or anomalies detected during software testing or post-deployment. An incident in QA might be a defect, a bug, a discrepancy in documentation, or any issue that deviates from the expected behavior or standards.

An incident report chronicles observed anomalies, capturing details like summary, steps, priority, and status. It's crucial for tracking and informing relevant stakeholders.

Incremental testing is an integration testing technique that tests program modules post-unit testing. Using stubs and drivers, it isolates and examines each module for defects.

Independent Verification and Validation (IV&V) refers to a specialized process where an external organization or team evaluates the correctness and quality of a software product, independent of the developers and the development process. The primary goal is to ensure that the system meets its specified requirements and functions as intended.

Input Validation Testing is a software testing technique that focuses on verifying the correctness and appropriateness of the data entered into a system. The primary objective is to ensure that the system can gracefully handle invalid, unexpected, or malicious input. By doing so, the system not only maintains its integrity and functions correctly but also safeguards against potential vulnerabilities like SQL injections, cross-site scripting, and other forms of attacks that exploit poorly validated input. Through Input Validation Testing, testers aim to identify weaknesses in input validation mechanisms and ensure that only valid and safe data passes through to the application's processing stages.

Inspection, sometimes referred to as a Fagan inspection, is a peer review process where trained individuals evaluate a work product looking for defects.

Performed after unit testing, integration testing identifies defects when integrated components or units interact.

Interface testing ensures that two software components communicate correctly. Interfaces, including APIs and Web services, connect these components, and their testing is termed Interface Testing.

Internationalization Testing, often abbreviated as "i18n testing," is a software testing process that verifies the adaptability of an application for use in different languages, regions, and cultures. The primary goal of internationalization testing is to ensure that the software's architecture is designed in a way that it can seamlessly handle multiple languages, character sets, and cultural conventions without necessitating changes to its core codebase.

Interoperability Testing is a type of software testing that evaluates the capability of different systems, applications, or components to exchange and utilize information effectively, accurately, and consistently. The primary goal is to ensure that diverse software products and services can work seamlessly together in a given environment, be it within the same organization or across different entities. Interoperability Testing identifies integration issues, incompatibilities, or other hindrances that might prevent systems from interacting as intended. Such testing is crucial in environments where multiple vendors, platforms, or standards coexist and need to cooperate without causing disruptions or data discrepancies.

The ISTQB (International Software Testing Qualifications Board) is a non-profit organization that certifies software testers.

Iterative testing involves periodically updating a product based on previous feedback and then testing the changes against set benchmarks.

Jasmine is an open-source testing framework for JavaScript. It is designed to be behavior-driven, allowing developers to write tests in a way that describes the expected behavior of the software in clear, human-readable terms. Jasmine provides functions to structure your tests, set up preconditions, and define assertions.

Jest is a JavaScript unit testing framework by Meta. It's primarily used for writing unit tests to assess individual code segments.
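
A minimal Jest unit test, assuming Jest's globals (describe, test, expect) and a hypothetical sum() function:

```typescript
// Hypothetical unit under test.
function sum(a: number, b: number): number {
  return a + b;
}

describe("sum", () => {
  test("adds two numbers", () => {
    expect(sum(1, 2)).toBe(3);
  });

  test("handles negatives", () => {
    expect(sum(-1, -2)).toBe(-3);
  });
});
```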

Jira is a popular software developed by Atlassian, primarily used for bug tracking, issue tracking, and project management. Originating as a tool for software development teams to track defects and tasks, Jira has since evolved to cater to various business functions with customizable workflows, real-time collaboration, and integration capabilities. It allows teams to create, prioritize, and assign work items, such as stories or bugs, and then track their progress through different stages. Jira supports Agile methodologies like Scrum and Kanban, offering features like boards, backlogs, and sprints. Its versatility and extensive plugin ecosystem make it suitable for a wide range of use cases beyond just software development.

JMeter, officially known as Apache JMeter, is an open-source software application developed by the Apache Software Foundation. It is designed for load testing and performance measurement of web applications, but its capabilities extend beyond web protocols. JMeter allows users to simulate multiple users with concurrent threads, create a variety of requests to servers, and analyze the performance of applications under different load conditions.

Features of JMeter include its ability to simulate multiple users with concurrent threads, support for various protocols (including HTTP, FTP, JDBC, and more), and a graphical interface for designing and visualizing test plans. Its extensible nature allows developers and testers to integrate additional plugins or write custom code to enhance its functionality. With JMeter, organizations can validate the scalability, responsiveness, and reliability of their software applications and infrastructure.

JUnit is a Java testing framework enabling developers to craft and run automated tests. Whenever new code is incorporated, tests must be rerun to confirm the code's integrity.

Keyword driven testing is a functional testing approach where test case design is separated from its execution. Keywords represent user actions on test objects, making test cases clearer and more maintainable.

Lighthouse is an open-source, automated tool developed by Google for improving the quality of web pages. It provides audits for performance, accessibility, progressive web apps, SEO, and other aspects of web page quality. By running Lighthouse against a web page, developers and testers can obtain a set of actionable recommendations and insights that help in optimizing the user experience and overall effectiveness of the website.

Load Testing evaluates how a system, software, or application behaves under multiple concurrent users. It mimics real-life conditions to determine system performance.

Localization testing ensures a software product resonates culturally with users in a specific region, guaranteeing its usability in that locale.

Maintainability measures how easily a system can be updated or modified. This attribute is crucial as software undergoes changes throughout its lifecycle.

Maintenance testing helps identify, diagnose, and verify equipment problems, ensuring the effectiveness of repair measures.

Manual testing is the process of manually checking software functionalities against expected outcomes.

Microservices testing evaluates each individual microservice's functionality, ensuring they cohesively function as a unified application and are resilient to individual failures.

Mobile app testing involves verifying a mobile application's functionalities before its public release, ensuring both technical and business requirements are met.

Mobile Device Testing assesses a device's features and qualities, ensuring it fulfills its intended purpose.

Mock testing utilizes mock objects to mimic real objects in tests.
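
A sketch using Jest's jest.fn() as the mock object; the Mailer interface and registerUser() are hypothetical:

```typescript
interface Mailer {
  send(to: string, subject: string): void;
}

// Hypothetical function under test: notifies the user after registration.
function registerUser(email: string, mailer: Mailer): void {
  // ...persist the user, then notify them...
  mailer.send(email, "Welcome!");
}

test("registration sends a welcome email", () => {
  const mockMailer: Mailer = { send: jest.fn() };

  registerUser("user@example.com", mockMailer);

  // The mock records its calls, so the interaction can be verified
  // without sending real email.
  expect(mockMailer.send).toHaveBeenCalledWith("user@example.com", "Welcome!");
});
```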

Monkey testing involves providing random inputs to a system to check if it crashes.

Mean Time Between Failures (MTBF) calculates the average duration between equipment failures, aiding in predicting future failures or replacement needs.

Mutation testing evaluates the quality of software tests. It involves creating slight modifications in a program and checking if existing tests can detect these changes.
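
An illustration of the idea (not a tool invocation): a typical mutant flips an operator, and a boundary-aware test "kills" it; isAdult() is a hypothetical function:

```typescript
// Hypothetical function under test.
function isAdult(age: number): boolean {
  return age >= 18;
  // A typical mutant flips the operator: `return age > 18;`
}

// A boundary-aware test kills that mutant: it fails when >= becomes >.
console.assert(isAdult(18) === true, "18 is an adult");

// A weaker suite that only checked age 30 would let the mutant survive,
// revealing a gap in the tests.
console.assert(isAdult(30) === true, "30 is an adult");
```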

Negative testing verifies an application's ability to handle incorrect input, comparing expected outcomes with actual results.

Node.js is an open-source, cross-platform JavaScript runtime environment that allows developers to execute JavaScript code server-side. Traditionally, JavaScript was primarily used for client-side scripting in web browsers. Node.js, however, enables JavaScript to be used for building scalable network applications outside the browser. Built on Chrome's V8 JavaScript engine, Node.js is designed for building fast and efficient web applications, especially I/O-bound applications.

Non-functional testing evaluates software's non-functional attributes, such as usability and performance, ensuring the software's overall competence and effectiveness.

NUnit is an open-source unit testing framework for C# derived from JUnit. It facilitates writing and executing tests in .NET, with tools like NUnit-console.exe for batch test executions.

Operational testing ensures a product or service meets its operational requirements, like security, performance, and maintainability. It's a subset of non-functional acceptance testing.

Orthogonal array testing is a statistical approach that maximizes coverage with minimal test cases.

OTT testing assesses the quality of video, data, and voice services provided over the internet, ensuring customer experience, security, and connectivity across multiple components and infrastructure.

The Page Object Model (POM) is a design pattern that consolidates web elements into an object repository, promoting code reusability and simplifying test maintenance.
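
A Playwright-flavored sketch of the pattern; the LoginPage class, URL, and selectors are hypothetical. Selectors live in one place, so tests read as user intent and maintenance is localized:

```typescript
import { Page } from "@playwright/test";

// The page object: the only place that knows this page's selectors.
class LoginPage {
  constructor(private readonly page: Page) {}

  async open(): Promise<void> {
    await this.page.goto("https://app.example.com/login");
  }

  async login(email: string, password: string): Promise<void> {
    await this.page.fill("input[name=email]", email);
    await this.page.fill("input[name=password]", password);
    await this.page.click("button[type=submit]");
  }
}

// A test then reads as user intent, not raw selectors:
//   const loginPage = new LoginPage(page);
//   await loginPage.open();
//   await loginPage.login("user@example.com", "s3cret");
```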

Pair testing is a collaborative approach where two team members, typically a tester and a developer or analyst, work together on testing efforts.

Parameterized testing executes the same test using varied data sets.
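
A sketch using Jest's test.each, assuming a hypothetical isEven() function; one test body runs once per data row:

```typescript
// Hypothetical function under test.
function isEven(n: number): boolean {
  return n % 2 === 0;
}

// Each row is one data set: input and expected result.
const cases: Array<[number, boolean]> = [
  [0, true],
  [1, false],
  [2, true],
  [-4, true],
];

test.each(cases)("isEven(%i) returns %s", (input, expected) => {
  expect(isEven(input)).toBe(expected);
});
```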

Path testing assesses the distinct paths software can take during its execution.

Peer testing involves team members evaluating each other's work, ensuring code consistency and pursuing shared goals.

Penetration Testing is a cybersecurity practice where trained professionals simulate cyberattacks on a system, network, or application to identify vulnerabilities that could be exploited by malicious actors. The primary objective of penetration testing is to discover security weaknesses from an attacker's perspective, thereby allowing organizations to better understand potential risks and take corrective actions before real-world malicious attacks occur. Penetration tests can be manual or automated and are often categorized by their scope and the knowledge level of the tester, such as black box (tester has limited knowledge about the system) or white box (tester has complete knowledge about the system).

A performance indicator or KPI is a metric testers use to measure the efficacy and quality of their testing process.

Performance testing gauges a product's capability and responsiveness under varying workloads, predicting how it would manage future demands.

Playwright is an open-source testing framework developed by Microsoft for end-to-end testing of web applications. It enables automated browser testing across multiple browsers, including Chrome, Firefox, and WebKit. With Playwright, testers and developers can write scripts that interact with web pages in a manner similar to real users, performing actions like clicking buttons, filling forms, and navigating between pages.
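
A minimal Playwright test sketch; it assumes the public example.com page, whose title and "More information" link may of course change:

```typescript
import { test, expect } from "@playwright/test";

test("home page has the expected title", async ({ page }) => {
  await page.goto("https://example.com");

  // Playwright's web-first assertions retry until the condition holds.
  await expect(page).toHaveTitle(/Example Domain/);

  await page.click("text=More information");
  await expect(page).toHaveURL(/iana\.org/);
});
```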

Postcondition is a condition that must hold true after a segment of code runs, often verified through code predicates.

Postman is a widely-used software tool that facilitates API (Application Programming Interface) development and testing. It offers a user-friendly interface that allows developers and testers to send requests to and receive responses from web services. With Postman, users can create, save, and organize HTTP requests, test APIs by sending various request types (GET, POST, PUT, DELETE, etc.), and inspect the responses. Additional features include the ability to automate tests, create mock servers, and document APIs. Postman also provides collaboration capabilities for teams, enabling them to share collections of requests, environments, and other data. Over time, Postman has evolved from a simple API testing tool into a comprehensive API development environment.

Priority denotes the order or significance of an issue based on user needs, while severity indicates its system impact. Decisions on priority and severity may vary based on roles and processes.

QA metrics are tools that developers use to enhance their product quality by refining testing processes, helping in identifying or forecasting product flaws.

Quality Assurance (QA) is a process ensuring the highest possible product or service quality. It emphasizes refining processes for consistent quality deliverables.

Quality management ensures that an organization's products or services consistently meet a certain quality standard.

Regression testing checks if existing functionalities remain intact after new changes. It ensures that new additions don't disrupt existing software operations.

Release testing evaluates a new software version to determine its readiness for release, examining its complete functionality.

Reliability Testing assesses a software's capacity to function under specific conditions. It aims to identify issues related to the software's design and functionality.

Requirements management tools oversee requirements, inform stakeholders about changes, and regulate new or adjusted requirements.

Responsive design involves dynamically adjusting a website's appearance based on screen size and device orientation, ensuring compatibility between content and display.

Retesting involves re-running tests against modified software to verify that previously identified defects are resolved and that the fixes haven't introduced new issues.

Reviewers are experts who evaluate code to detect bugs, enhance quality, and guide developers. If code spans multiple domains, it should be assessed by several experts.

Risk-based testing prioritizes testing based on the potential risk of feature or function failure.

Robustness testing evaluates software's performance under extreme or unexpected inputs.

RUP (Rational Unified Process), developed by Rational (an IBM division), is an iterative software development process organized into four phases: inception, elaboration, construction, and transition. Across these phases it defines disciplines such as business modeling, analysis and design, implementation, testing, and deployment.

Sanity testing, a subset of regression testing, quickly checks that code modifications function correctly. If issues arise, the build is rejected rather than passed on for deeper testing.

Scalability testing confirms if a software application can expand its non-functional capabilities. It often encompasses performance and reliability assessments.

Screenshot testing automates the assessment of a web page or application's visual elements by comparing current visuals with baseline images, identifying visual regressions and other discrepancies.

Scrum is an iterative and incremental Agile framework that facilitates collaboration among a cross-functional team to develop and deliver high-quality products. In Scrum, work is broken down into cycles called "sprints," typically lasting two to four weeks, during which a predetermined set of features are developed and tested. Key roles in Scrum include the Product Owner (who defines product requirements and prioritizes tasks), the Scrum Master (who ensures the team follows Scrum practices and principles), and the Development Team (which includes testers, developers, and other necessary roles). Regular ceremonies, such as Daily Stand-ups, Sprint Planning, Sprint Review, and Sprint Retrospective, ensure consistent communication and reflection on progress and processes. In the context of software testing, Scrum emphasizes the integration of testing throughout the sprint, ensuring that features are potentially shippable by the sprint's end.

Security Testing aims to reveal potential vulnerabilities in a software system which may lead to information loss, revenue reduction, or reputational damage.

Selenium is an open-source software suite of browser automation tools primarily used for automating web browsers in the context of web application testing. It provides a way for developers and testers to write scripts in various programming languages (such as Java, C#, Python, and Ruby) to simulate user interactions with web pages and web applications.
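
A sketch using the official JavaScript bindings (the selenium-webdriver package); the link text assumes the public example.com page:

```typescript
import { Builder, By, until } from "selenium-webdriver";

(async () => {
  // Start a Chrome session (assumes a local driver is available).
  const driver = await new Builder().forBrowser("chrome").build();
  try {
    await driver.get("https://example.com");

    // Simulate a user clicking the page's link.
    const link = await driver.findElement(By.linkText("More information..."));
    await link.click();

    // Wait for the destination page to load.
    await driver.wait(until.titleContains("IANA"), 5000);
  } finally {
    await driver.quit();
  }
})();
```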

Selenium IDE is a record-and-playback browser extension that captures user interactions, such as logging in, searching for items, and other UI actions, and replays them as tests.

Session-based testing is an organized form of exploratory testing conducted in time-boxed sessions.

Setup refers to arranging the necessary conditions for test cases to run.

Severity gauges a defect's impact on an application or system. Defects with major system repercussions are assigned higher severity levels, typically determined by the Quality Assurance Engineer.

Shift-left Testing integrates testing early in the software development process. By testing frequently and early, critical issues are identified before the deployment phase, promoting better code quality.

The SDLC (Software Development Life Cycle) is a framework for software creation that encompasses planning, implementation, testing, and product release, ensuring quality, timely delivery, and adaptability to evolving user needs.

Software quality reflects a software's capability to meet user requirements as documented in the SRS (Software Requirement Specifications). High-quality software aligns with user specifications and is maintainable, timely, and cost-effective.

Software quality management focuses on ensuring that a software application meets quality benchmarks set by users and adheres to both regulatory and development standards.

Software risk analysis inspects code violations that could compromise software stability, security, or performance.

Software testing confirms that a software product or application functions correctly, achieves its intended goals, and is free of defects.

SQL (Structured Query Language) is a standardized programming language specifically designed for managing and manipulating relational databases. SQL is used to perform tasks such as querying data, updating data, inserting data, and deleting data from a database. It also involves creating and modifying schemas (database structures) and controlling access to data. SQL provides a consistent interface to relational database management systems (RDBMS) and is supported by most modern RDBMS platforms like MySQL, PostgreSQL, SQL Server, Oracle, and many others. Through SQL, users can define, retrieve, and manipulate data within the database efficiently and effectively.

State Transition testing is a black-box testing method that observes system behavior for consecutive input conditions, using both positive and negative inputs.

Static Testing involves early-cycle assessment of software artifacts like requirements, design documents, and source code without execution. This technique identifies defects and elevates product quality, and can be manual or automated.

The STLC (Software Testing Life Cycle) outlines the sequential tasks and stages in testing software. By systematically covering tasks like planning, requirements analysis, test design, execution, and reporting, the STLC aids in risk identification and ensures the software meets its objectives.

Stress testing (Intrusive Testing) gauges the stability and resilience of a system, infrastructure, or entity under extreme conditions.

Structural Testing evaluates the software's code structure. Also known as white-box or glass-box testing, it is primarily done by developers to verify how the system is built rather than what it does.

Swagger, now often referred to as OpenAPI, is a set of tools and specifications for building, designing, and documenting RESTful APIs. It offers a standard, language-agnostic interface to RESTful APIs, allowing both humans and computers to understand the capabilities of a service without accessing its source code or further detailed documentation.

System integration testing is a technique to evaluate the entirety of a software application. It checks if both the functional and hardware components of the software harmonize.

System testing verifies interactions between software components in an integrated environment. Based on functional or design criteria, it helps identify shortcomings in the overall software functionality.

A test approach outlines the strategy for how testing will be conducted. It specifies the tasks to achieve specific testing goals in a project.

Test automation involves using tools to run tests and compare actual outcomes to expected results. These tools can streamline manual processes or integrate with continuous integration systems.

A test case is a detailed specification of test inputs, conditions, procedures, and expected outcomes. It ensures comprehensive program evaluation and identifies potential missed errors.

Test Case Management, in the realm of software testing, refers to the process of documenting, organizing, tracking, and maintaining test cases throughout the software development lifecycle. It involves creating a structured repository of test cases, associating them with specific requirements or user stories, tracking their execution status, and managing their versions and iterations.

Test classes are code units designed to validate the proper functioning of an associated class; the term is used notably for Salesforce Apex classes.

Test comparison refers to the process of contrasting data from previously executed tests.

Test coverage measures the portion of a program’s code tested. It identifies which sections of code are exercised during test cases, ensuring thorough evaluation.

Test data is the input provided to systems or software for testing purposes. Varying this data ensures comprehensive application evaluation and error handling.

A test design is a detailed plan outlining the testing approach, features to test, and necessary requirements, cases, and procedures. It defines the testing success criteria.

Test design tools aid in creating test cases or inputs. With an automated oracle, they can determine expected results, effectively generating test cases.

TDD (Test-Driven Development) is a development methodology that prioritizes writing tests before production code. The process involves writing a test, creating minimum code to pass it, and then refining the code.
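
A sketch of one red-green-refactor cycle, assuming Jest's globals and a hypothetical slugify() function:

```typescript
// 1. Red: write a failing test for behavior that does not exist yet.
test("slugify lowercases and hyphenates", () => {
  expect(slugify("Hello World")).toBe("hello-world");
});

// 2. Green: write the minimum code that makes the test pass.
function slugify(input: string): string {
  return input.toLowerCase().replace(/\s+/g, "-");
}

// 3. Refactor: clean up the implementation while the test stays green,
//    then repeat the cycle with the next test.
```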

A test environment is a configured setting where tests are executed. It encompasses the necessary hardware, software, and network configurations tailored for the application under test.

Test execution is the process of running software test cases to verify adherence to user requirements. It is pivotal in the software testing and development life cycles, starting after test planning.

Test execution automation involves using automation tools for test execution, either directly or via a management tool. The concluding test report offers a summarized account of the project's testing.

A test execution schedule orchestrates sequential test steps, either at preset times or upon build completion triggers.

Test execution techniques enhance test execution through planning, strategies, and tactics, impacting how tests are conducted rather than the test run itself.

Test execution tools evaluate software against specific test scenarios, comparing results to expected outcomes. Known also as capture/playback or record/playback tools, they can document manual tests.

A test harness is a suite of auxiliary tools, including stubs and drivers, used during testing. It utilizes a test library to run tests and generate reports.

Test infrastructure encompasses both software and hardware required for smooth software application operations. It integrates activities and methods to optimize test speed, enabling quicker releases.

A test log is an essential document detailing a test run’s summary, capturing both successful and failed tests. It provides insights into test operations, issues’ origins, and failure reasons, facilitating post-run analysis.

Test management supervises the testing processes, documentation, and other software aspects, ensuring thorough testing and high-quality software delivery.

Test observability denotes the capability to monitor a system during testing, analyzing its performance to pinpoint and rectify issues. It aggregates data like logs, metrics, and traces for insights and improvements.

A test oracle is a mechanism to determine whether a test has passed or failed.

A test plan is a document detailing the objectives and activities of testing. Prepared by the test lead, it communicates the testing approach, pass/fail criteria, stages, and other vital information to the project team and stakeholders. It also covers potential risks and contingency plans.

A test policy is a document set by senior management, outlining the principles and approaches the organization will adopt for testing its products.

The test process is a systematic set of tasks and activities aimed at ensuring a software application adheres to its requirements and quality standards. It includes test preparation, creation, execution, and reporting.

Test process improvement is an evaluation that compares an organization's testing activities to industry standards, offering an objective view of the organization's testing performance.

The test pyramid is a framework that assists developers in evaluating the mix of tests in an automated suite. It aims to expedite the detection of issues when changes are made to the codebase.

A test report is a summary of testing objectives, activities, and results, designed to inform stakeholders about product quality and its readiness for release.

A test runner is a tool that automates the running of test cases and the collection of results, ensuring software functions as intended. It can be a GUI or command-line tool.

A test scenario outlines a user action at a high level. It is broader than a detailed test case.

A test script contains specific instructions for the system during a test.

Generative drafts for test design, allowing for iterative test development. These guidelines help compare new test versions with previous ones.

A test strategy is a document detailing the methodology adopted for software testing. It provides clarity on the testing approach tailored to achieve organizational testing objectives.

A test stub simulates the behavior of components that are absent.

A test suite is a collection of tests examining application features. Automated test suites execute these tests to provide pass/fail results, offering repeatability and reducing human error.

Test tools assist in various test activities, from planning to analysis. They identify input fields and their valid value ranges, often in tandem with test management or CASE tools.

Time-boxed testing is conducting tests within a predefined time frame.

Top-down integration testing is a method starting with high-level modules and progressing to lower-level ones. Stubs are used to simulate lower-module responses until those modules are integrated.

A traceability matrix is a table-type document tracking software requirements. It supports tracing requirements forward to code and backward from code to requirements.

UI testing is the evaluation of a web application's user interface to identify glitches and ensure it aligns with specified requirements.

Unit test frameworks are tools designed for creating and executing unit tests, offering foundational structures for testing and reporting outcomes.

Unit testing is the practice of testing individual software units or components to validate their functionality.

Usability testing is a qualitative research method providing insights into user interactions with software. It identifies usability issues and evaluates user-friendliness.

A use case is a description detailing how a user interacts with a system. It forms a foundation for system development and tests.

Use case testing is an approach examining all potential user interactions with software. It is especially useful for assessing error-handling and system robustness.

User acceptance testing (UAT) is a testing phase where the customer validates the software in its intended environment before release, ensuring alignment with their expectations.

Validation is an evaluation of specific development stage requirements, ensuring the final product aligns with customer expectations.

Verification comprises activities focused on ensuring software correctly implements specific functionalities by comparing it against design specifications.

Visual regression testing is the evaluation of the user interface after code changes. It checks for appearance and usability impacts, ensuring new changes don't disrupt existing functions.

The V-model is a development model in which each development stage is aligned with a corresponding testing or validation phase.

Volume testing challenges the system by subjecting it to large amounts of data.

Web automation is the programmatic operation of websites through test scripts and tools, replacing manual tasks to save time and reduce costs.

WebDriver is an open-source framework for browser automation, enabling automated tests for web pages across various browsers and operating systems.

Web performance testing is the evaluation of a web application's speed, responsiveness, and stability under varying loads. It identifies and addresses potential bottlenecks.

Web testing tools aid in product quality assurance. They support continuous integration, agile development, and DevOps amidst evolving demands.

Web testing is a crucial evaluation for web developers, assessing the functionality, usability, compatibility, security, and performance of web applications.

White box testing is the evaluation of software's internal coding and architecture. It emphasizes security, input-output flow, design, and usability.

XPath is a language designed to extract and manipulate XML document data. It is useful for retrieving XML data for content scanning, and it is widely used in test automation to locate elements in HTML documents.
