What is System Integration Testing (SIT) with Example
System Integration Testing is defined as a type of software testing carried out in an integrated hardware and software environment to verify the behavior of the complete system. It is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements.
System Integration Testing (SIT) is performed to verify the interactions between the modules of a software system. It deals with the verification of the high and low-level software requirements specified in the Software Requirements Specification/Data and the Software Design Document.
It also verifies a software system's coexistence with others and tests the interface between modules of the software application. In this type of testing, modules are first tested individually and then combined to make a system.
For Example, software and/or hardware components are combined and tested progressively until the entire system has been integrated.
In this tutorial, you will learn-
- What is System Integration Testing?
- Why do System Integration Testing
- How to do System Integration Testing
- Entry and Exit Criteria for Integration Testing
- Hardware to Software Integration Testing
- Software to Software Integration Testing
- Top-Down Approach
- Bottom-up Approach
- Big Bang Approach
In Software Engineering, System Integration Testing is done because,
- It helps to detect defects early
- Earlier feedback on the acceptability of the individual modules is available
- Scheduling of defect fixes is flexible, and it can be overlapped with development
- It verifies correct data flow
- It verifies correct control flow
- It verifies correct timing
- It verifies correct memory usage
- It verifies conformance with software requirements
It's a systematic technique for constructing the program structure while conducting tests to uncover errors associated with interfacing.
All modules are integrated in advance, and the entire program is tested as a whole. But during this process, a set of errors is likely to be encountered.
Correction of such errors is difficult because isolating their causes is complicated by the vast expanse of the entire program. Once these errors are rectified, new ones appear, and the process continues in a seemingly endless loop. To avoid this situation, another approach is used: Incremental Integration. We will see more detail about the incremental approach later in this tutorial.
In some incremental methods, the integration tests are conducted on a system based on the target processor. The methodology used is Black Box Testing. Either bottom-up or top-down integration can be used.
Test cases are defined using the high-level software requirements only.
Software integration may also be achieved largely in the host environment, with units specific to the target environment continuing to be simulated in the host. Repeating tests in the target environment for confirmation will again be necessary.
Confirmation tests at this level will identify environment-specific problems, such as errors in memory allocation and de-allocation. The practicality of conducting software integration in the host environment will depend on how much target-specific functionality there is. For some embedded systems, the coupling with the target environment will be very strong, making it impractical to conduct software integration in the host environment.
Large software developments will divide software integration into a number of levels. The lower levels of software integration could be based predominantly in the host environment, with later levels of software integration becoming more dependent on the target environment.
Note: If only software is being tested, it is called Software-Software Integration Testing [SSIT]; if both hardware and software are being tested, it is called Hardware-Software Integration Testing [HSIT].
Usually while performing Integration Testing, ETVX (Entry Criteria, Task, Validation, and Exit Criteria) strategy is used.
Entry Criteria:
- Completion of Unit Testing
- Software Requirements Data
- Software Design Document
- Software Verification Plan
- Software Integration Documents
Tasks:
- Based on the High and Low-level requirements, create test cases and procedures
- Combine low-level modules into builds that implement a common functionality
- Develop a test harness
- Test the build
- Once the test is passed, the build is combined with other builds and tested until the system is integrated as a whole.
- Re-execute all the tests on the target processor-based platform, and obtain the results
Exit Criteria:
- Successful completion of the integration of the Software module on the target Hardware
- Correct performance of the software according to the requirements specified
Outputs:
- Integration test reports
- Software Test Cases and Procedures [SVCP].
Hardware Software Integration Testing is a process of testing Computer Software Components (CSC) for high-level functionalities on the target hardware environment. The goal of hardware/software integration testing is to test the behavior of developed software integrated on the hardware component.
Requirement based Hardware-Software Integration Testing
The aim of requirements-based hardware/software integration testing is to make sure that the software in the target computer will satisfy the high-level requirements. Typical errors revealed by this testing method include:
- Hardware/software interface errors
- Violations of software partitioning
- Inability of built-in tests to detect failures
- Incorrect responses to hardware failures
- Errors due to sequencing, transient input loads, and input power transients
- Incorrect behavior of feedback loops
- Incorrect or improper control of memory management hardware
- Data bus contention problems
- Incorrect operation of the mechanism used to verify the compatibility and correctness of field-loadable software
Hardware Software Integration deals with the verification of the high-level requirements. All tests at this level are conducted on the target hardware.
- Black box testing is the primary testing methodology used at this level of testing.
- Define test cases from the high-level requirements only
- A test must be executed on production standard hardware (on target)
Things to consider when designing test cases for HW/SW Integration
- Correct acquisition of all data by the software
- Scaling and range of data as expected from hardware to software
- Correct output of data from software to hardware
- Data within specifications (normal range)
- Data outside specifications (abnormal range)
- Boundary data
- Interrupt processing
- Correct memory usage (addressing, overlaps, etc.)
- State transitions
Note: For interrupt testing, all interrupts will be verified independently from initial request through full servicing and onto completion. Test cases will be specifically designed in order to adequately test interrupts.
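A minimal sketch of test-case design for the normal-range, abnormal-range, and boundary data points listed above. The 0..100 valid range and the `process_reading` interface are assumptions made only for illustration:

```python
# Assumed specification for illustration: valid sensor input range is 0..100.
VALID_RANGE = (0, 100)

def process_reading(raw):
    """Reject data outside the specification; pass valid data through."""
    low, high = VALID_RANGE
    if raw < low or raw > high:
        raise ValueError("reading out of specification")
    return raw

# Test cases covering normal, boundary, and abnormal data.
test_cases = {
    "normal":        50,   # within specification (normal range)
    "boundary_low":  0,    # lower boundary value
    "boundary_high": 100,  # upper boundary value
    "abnormal":      150,  # outside specification; must be rejected
}

results = {}
for name, value in test_cases.items():
    try:
        results[name] = ("pass", process_reading(value))
    except ValueError:
        results[name] = ("rejected", None)
```

The same pattern extends to the other considerations, e.g. scaling checks between the hardware and software representations of a value.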
It is the testing of the Computer Software Component operating within the host/target computer environment, while simulating the entire system [other CSCs], and on the high-level functionality.
It focuses on the behavior of a CSC in a simulated host/target environment. The approach used for Software Integration can be an incremental approach ( top-down, a bottom-up approach or a combination of both).
Incremental testing is a way of performing integration testing. In this testing method, you first test each module of the software individually and then continue testing by appending other modules to it, one after another.
Incremental integration is the antithesis of the big bang approach. The program is constructed and tested in small segments, where errors are easier to isolate and correct. Interfaces are more likely to be tested completely, and a systematic test approach may be applied.
There are two types of Incremental testing
- Top down approach
- Bottom Up approach
In this approach, you start by testing only the user interface, with the underlying functionality simulated by stubs; then you move downwards, integrating lower and lower layers.
- Starting with the main control module, the modules are integrated by moving downward through the control hierarchy
- Sub-modules to the main control module are incorporated into the structure either in a breadth-first manner or depth-first manner.
- Depth-first integration integrates all modules on a major control path of the structure.
The module integration process is done in the following manner:
- The main control module is used as a test driver, and the stubs are substituted for all modules directly subordinate to the main control module.
- The subordinate stubs are replaced one at a time with actual modules depending on the approach selected (breadth first or depth first).
- Tests are executed as each module is integrated.
- On completion of each set of tests, another stub is replaced with the real module.
- To make sure that new errors have not been introduced Regression Testing may be performed.
The process continues from step 2 until the entire program structure is built. The top-down strategy sounds relatively uncomplicated, but in practice, logistical problems arise.
The most common of these problems occur when processing at low levels in the hierarchy is required to adequately test upper levels.
Stubs replace low-level modules at the beginning of top-down testing and, therefore no significant data can flow upward in the program structure.
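The stub mechanism described above can be sketched as follows. `database_stub`, `database_real`, and `main_controller` are hypothetical modules used only to show how a stub is swapped for the real module while the same test is re-run (regression testing):

```python
def database_stub(query):
    """Stub: returns canned data so upper layers can be tested early."""
    return ["canned-row"]

def database_real(query):
    """Real module that later replaces the stub."""
    return [query.upper()]

def main_controller(db=database_stub):
    """Main control module under test; its subordinate module is injectable,
    so the stub can be replaced without changing the test itself."""
    rows = db("select")
    return len(rows)

# Phase 1: test the main control module against the stub.
stub_result = main_controller(database_stub)

# Phase 2: replace the stub with the real module and re-execute the
# same test to confirm no new errors were introduced.
real_result = main_controller(database_real)
```

Note the limitation the text describes: the stub returns only canned data, so no significant data flows upward until the real module is in place.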
To address this, the tester has the following choices:
- Delay many tests until stubs are replaced with actual modules.
- Develop stubs that perform limited functions that simulate the actual module.
- Integrate the software from the bottom of the hierarchy upward.
Note: The first approach causes us to lose some control over the correspondence between specific tests and the incorporation of specific modules. This may result in difficulty in determining the cause of errors, which tends to violate the highly constrained nature of the top-down approach.
The second approach is workable but can lead to significant overhead, as stubs become increasingly complex.
Bottom-up integration begins construction and testing with modules at the lowest level in the program structure. In this process, the modules are integrated from the bottom to the top.
In this approach processing required for the modules subordinate to a given level is always available and the need for the stubs is eliminated.
This integration test process is performed in a series of four steps
- Low-level modules are combined into clusters that perform a specific software sub-function.
- A driver is written to coordinate test case input and output.
- The cluster or build is tested.
- Drivers are removed, and clusters are combined moving upward in the program structure.
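The four bottom-up steps can be sketched as follows, with hypothetical `parse` and `total` modules forming a cluster and a throwaway driver coordinating test-case input and output:

```python
# Step 1: low-level modules combined into a cluster performing one sub-function.
def parse(raw):
    """Lowest-level module: turn a comma-separated string into integers."""
    return [int(x) for x in raw.split(",")]

def total(values):
    """Lowest-level module: sum a list of values."""
    return sum(values)

def cluster(raw):
    """Cluster (build) combining the two low-level modules."""
    return total(parse(raw))

# Step 2: a driver is written to coordinate test-case input and output.
def driver():
    """Test driver: feeds inputs to the cluster and checks its outputs."""
    cases = {"1,2,3": 6, "10": 10}
    return all(cluster(raw) == want for raw, want in cases.items())

# Step 3: the cluster is tested via the driver.
driver_ok = driver()
# Step 4: the driver is discarded, and the tested cluster is combined
# with others, moving upward in the program structure.
```

Because the subordinate modules already exist and are tested, no stubs are needed at this level, exactly as the text states.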
As integration moves upward, the need for separate test drivers lessens. In fact, if the top two levels of the program structure are integrated top-down, the number of drivers can be reduced substantially, and the integration of clusters is greatly simplified.
In this approach, modules are not integrated until all of them are ready. Once they are ready, all the modules are integrated, and the system is then executed to check whether all the integrated modules work together.
In this approach, it is difficult to know the root cause of the failure because of integrating everything at once.
Also, there is a high chance that critical bugs will occur in the production environment.
This approach is adopted only when integration testing has to be done at once.
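For contrast, a big bang sketch: all (hypothetical) modules are wired together in one step and exercised only through an end-to-end check, which illustrates why a failure here is hard to localize to a specific interface:

```python
# Hypothetical modules, none of which are integration-tested individually.
def read_input():
    return "5"

def convert(s):
    return int(s)

def compute(n):
    return n * n

def fmt(n):
    return f"result={n}"

def whole_system():
    """Big bang: every module wired together at once, with no
    intermediate integration tests between any pair of modules."""
    return fmt(compute(convert(read_input())))

output = whole_system()
```

If `output` were wrong, any of the four interfaces could be at fault, which is exactly the root-cause problem described above; the incremental approaches avoid this by testing one interface at a time.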
- Integration is performed to verify the interactions between the modules of a software system. It helps to detect defects early
- Integration testing can be done for Hardware-Software or Software-Software Integration
- Integration testing is done by two methods
- Incremental approach
- Big bang approach
- While performing Integration Testing generally ETVX (Entry Criteria, Task, Validation, and Exit Criteria) strategy is used.