A Software Testing Metric can be defined as a quantitative measure that helps estimate the progress, quality, and health of a software testing effort. A metric defines in quantitative terms the degree to which a system, system component, or process possesses a given attribute.
A simple everyday example of a metric is the weekly mileage of a car compared with the ideal mileage recommended by its manufacturer.
Software testing metrics improve the efficiency and effectiveness of the software testing process.
Software testing metrics, or software test measurement, give a quantitative indication of the extent, capacity, dimension, amount, or size of some attribute of a process or product.
Example of a software test measurement: the total number of defects.
In this tutorial, you will learn-
- What is Software Testing Metric?
- Why Test Metrics are Important?
- Types of Test Metrics
- Manual Test Metrics
- Test Metrics Life Cycle
- How to calculate Test Metric
- Example of Test Metric
- Test Metrics Glossary
"We cannot improve what we cannot measure" and Test Metrics helps us to do exactly the same.
- Take decision for next phase of activities
- Evidence of the claim or prediction
- Understand the type of improvement required
- Take decision or process or technology change
- Process Metrics: It can be used to improve the process efficiency of the SDLC (Software Development Life Cycle)
- Product Metrics: It deals with the quality of the software product
- Project Metrics: It can be used to measure the efficiency of a project team or any testing tools being used by the team members
Identifying the correct testing metrics is very important. A few things need to be considered before identifying test metrics:
- Fix the target audience for the metric preparation
- Define the goal for metrics
- Introduce all the relevant metrics based on project needs
- Analyze the cost-benefit aspect of each metric and the project lifecycle phase in which it yields the maximum output
In software engineering, manual test metrics are classified into two categories:
- Base Metrics
- Calculated Metrics
Base metrics are the raw data collected by the Test Analyst during test case development and execution (e.g., number of test cases executed, number of test cases). Calculated metrics are derived from the data collected in base metrics and are usually tracked by the test manager for test reporting purposes (e.g., % Complete, % Test Coverage).
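As an illustration, here is a minimal Python sketch (with made-up counts and hypothetical variable names) showing how calculated metrics can be derived from base metrics:

```python
# Base metrics: raw counts collected by the Test Analyst.
total_test_cases = 200       # number of test cases written
executed_test_cases = 150    # number of test cases executed
covered_requirements = 45    # requirements exercised by at least one test case
total_requirements = 50

# Calculated metrics: derived from the base metrics for test reporting.
percent_complete = (executed_test_cases / total_test_cases) * 100
# One common way to express % Test Coverage is requirements covered by tests
# versus total requirements; teams may define coverage differently.
percent_test_coverage = (covered_requirements / total_requirements) * 100

print(f"% Complete: {percent_complete:.1f}%")            # 75.0%
print(f"% Test Coverage: {percent_test_coverage:.1f}%")  # 90.0%
```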
Depending on the project or business model, some of the important metrics are listed below (see the sketch after this list for the defect metrics):
- Test case execution productivity metrics
- Test case preparation productivity metrics
- Defect metrics
- Defects by priority
- Defects by severity
- Defect slippage ratio
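As a rough illustration, the defect metrics above can be derived from a simple list of defect records. The following Python sketch uses made-up data, and the slippage definition shown (defects that escaped to production versus all defects reported) is one common variant:

```python
from collections import Counter

# Hypothetical defect records: priority, severity, and the phase where each was found.
defects = [
    {"priority": "High",   "severity": "Critical", "found_in": "testing"},
    {"priority": "Medium", "severity": "Major",    "found_in": "testing"},
    {"priority": "High",   "severity": "Major",    "found_in": "production"},
    {"priority": "Low",    "severity": "Minor",    "found_in": "testing"},
]

# Defects by priority and by severity are simple counts over the records.
defects_by_priority = Counter(d["priority"] for d in defects)
defects_by_severity = Counter(d["severity"] for d in defects)

# Defect slippage ratio: defects that slipped past testing into production,
# as a share of all defects reported.
slipped = sum(1 for d in defects if d["found_in"] == "production")
defect_slippage_ratio = slipped / len(defects)

print("Defects by priority:", dict(defects_by_priority))
print("Defects by severity:", dict(defects_by_severity))
print(f"Defect slippage ratio: {defect_slippage_ratio:.2f}")  # 0.25
```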
The metrics life cycle passes through different stages; the steps during each stage are summarized below:
| Sr# | Steps to test metrics | Example |
|---|---|---|
| 1 | Identify the key software testing processes to be measured | |
| 2 | In this step, the tester uses the data as a baseline to define the metrics | |
| 3 | Determine the information to be followed, the frequency of tracking, and the person responsible | |
| 4 | Effective calculation, management, and interpretation of the defined metrics | |
| 5 | Identify the areas of improvement depending on the interpretation of the defined metrics | |
To understand how to calculate test metrics, let us look at an example: the percentage of test cases executed.
To obtain the execution status of the test cases as a percentage, we use the formula:
Percentage test cases executed = (Number of test cases executed / Total number of test cases written) X 100
Likewise, you can calculate other parameters such as test cases not executed, test cases passed, test cases failed, and test cases blocked, as shown in the sketch below.
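A minimal Python sketch, with made-up counts, that applies this formula to a few of these execution-status metrics:

```python
# Hypothetical execution-status counts for one test cycle.
total_written = 120   # total number of test cases written
executed = 90         # test cases actually executed
passed = 70
failed = 15
blocked = 5

def percentage(part, whole):
    """Return part as a percentage of whole, guarding against division by zero."""
    return (part / whole) * 100 if whole else 0.0

# Execution status relative to all written test cases.
print(f"Test cases executed: {percentage(executed, total_written):.1f}%")  # 75.0%

# Pass/fail/blocked rates relative to the executed test cases,
# matching the glossary formulas later in this article.
print(f"Test cases passed:  {percentage(passed, executed):.1f}%")   # 77.8%
print(f"Test cases failed:  {percentage(failed, executed):.1f}%")   # 16.7%
print(f"Test cases blocked: {percentage(blocked, executed):.1f}%")  # 5.6%
```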
- Rework Effort Ratio = (Actual rework effort spent in that phase / Total actual effort spent in that phase) X 100
- Requirement Creep = (Total number of requirements added / Number of initial requirements) X 100
- Schedule Variance = ((Actual efforts – Estimated efforts) / Estimated efforts) X 100
- Cost of finding a defect in testing = Total effort spent on testing / Defects found in testing
- Schedule slippage = ((Actual end date – Estimated end date) / (Planned end date – Planned start date)) X 100
- Passed Test Cases Percentage = (Number of Passed Tests/Total number of tests executed) X 100
- Failed Test Cases Percentage = (Number of Failed Tests/Total number of tests executed) X 100
- Blocked Test Cases Percentage = (Number of Blocked Tests/Total number of tests executed) X 100
- Fixed Defects Percentage = (Defects Fixed/Defects Reported) X 100
- Accepted Defects Percentage = (Defects Accepted as Valid by Dev Team /Total Defects Reported) X 100
- Defects Deferred Percentage = (Defects deferred for future releases /Total Defects Reported) X 100
- Critical Defects Percentage = (Critical Defects / Total Defects Reported) X 100
- Average time for a development team to repair defects = (Total time taken for bugfixes/Number of bugs)
- Number of tests run per time period = Number of tests run/Total time
- Test design efficiency = Number of tests designed /Total time
- Test review efficiency = Number of tests reviewed /Total time
- Bug find rate, or number of defects per test hour = Total number of defects / Total number of test hours
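To close the glossary, here is a short Python sketch with hypothetical numbers that computes a few of the metrics above using the same formulas:

```python
# Hypothetical inputs for one test cycle.
defects_reported = 40
defects_fixed = 30
defects_accepted = 35       # defects accepted as valid by the dev team
critical_defects = 4
total_bugfix_hours = 90.0   # total time taken for bug fixes
test_hours = 60.0           # total number of test hours
actual_effort_hours = 550.0
estimated_effort_hours = 500.0

fixed_defects_pct = (defects_fixed / defects_reported) * 100        # 75.0%
accepted_defects_pct = (defects_accepted / defects_reported) * 100  # 87.5%
critical_defects_pct = (critical_defects / defects_reported) * 100  # 10.0%
avg_repair_time = total_bugfix_hours / defects_fixed                # hours per fixed bug
bug_find_rate = defects_reported / test_hours                       # defects per test hour
schedule_variance_pct = ((actual_effort_hours - estimated_effort_hours)
                         / estimated_effort_hours) * 100            # 10.0%

print(f"Fixed Defects Percentage:    {fixed_defects_pct:.1f}%")
print(f"Accepted Defects Percentage: {accepted_defects_pct:.1f}%")
print(f"Critical Defects Percentage: {critical_defects_pct:.1f}%")
print(f"Average repair time:         {avg_repair_time:.1f} hours per bug")
print(f"Bug find rate:               {bug_find_rate:.2f} defects per test hour")
print(f"Schedule Variance:           {schedule_variance_pct:.1f}%")
```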