While there are many types of software testing, two fundamental categories are Positive and Negative Testing.
Positive Testing is a type of testing in which a valid data set is entered as the input. It confirms that the software works as expected when given valid inputs. In short, this form of testing seeks to confirm that the software does exactly what it's supposed to do.
For example, let's say you have a text box that should turn green when you enter numbers. If you enter three numbers and the box turns green, the positive test passes. If it doesn't, the positive test fails; in other words, the software doesn't work as expected.
On the other hand, Negative Testing is a type of testing in which an invalid data set is entered as the input. It checks how the software behaves when given invalid, or incorrect, inputs. This form of testing confirms that the software does not do anything it should not do with the incorrect input.
Using the same text box example from above, say you enter three letters instead of numbers. The box should not turn green. If it does turn green, that would be a failed negative test; in other words, if numbers are not entered, it should not turn green.
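The text-box scenario above can be sketched as a pair of tests. The function name `should_turn_green` and the exact three-digit rule are assumptions for illustration, not a real API:

```python
def should_turn_green(text: str) -> bool:
    """Return True when the entry is exactly three digits (hypothetical rule)."""
    return len(text) == 3 and text.isdigit()

# Positive test: valid input, so the box should turn green.
assert should_turn_green("123") is True

# Negative test: invalid input (letters), so the box should NOT turn green.
assert should_turn_green("abc") is False
```

If the first assertion fails, the positive test has failed; if the second fails (the box turns green on letters), the negative test has failed.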
Both types of testing are most commonly applied in test cases and typically draw on two design techniques: boundary value analysis and equivalence partitioning.
Boundary value analysis is a technique whereby test cases are designed around the boundary values of an input range. Because defects tend to cluster at the extreme ends of the input domain rather than at the center, the focus of boundary value analysis is to find the errors existing at the boundaries.
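Here is a minimal sketch of boundary value analysis, assuming a hypothetical age field that accepts values from 18 to 65 inclusive (the field and its range are invented for illustration):

```python
def accepts_age(age: int) -> bool:
    """Hypothetical validation rule: ages 18 through 65 are valid."""
    return 18 <= age <= 65

# Test values cluster at the boundaries, where defects are most likely:
# just below, on, and just above each edge of the valid range.
boundary_cases = {
    17: False,  # just below the lower boundary
    18: True,   # on the lower boundary
    19: True,   # just above the lower boundary
    64: True,   # just below the upper boundary
    65: True,   # on the upper boundary
    66: False,  # just above the upper boundary
}

for age, expected in boundary_cases.items():
    assert accepts_age(age) is expected
```

A common off-by-one bug, such as writing `18 < age` instead of `18 <= age`, would be caught by the test at exactly 18 while every mid-range value still passes.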
Equivalence partitioning entails separating input data into partitions of equivalent data from which test cases can be selected. Because the software should treat every value in a partition the same way, one representative test per partition suffices, which reduces the total number of cases that must be written out.
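Equivalence partitioning can be sketched for the same hypothetical age field (the field, its 18-to-65 range, and the partition labels are all assumptions for illustration): the input domain splits into three partitions, and one representative value stands in for each:

```python
def accepts_age(age: int) -> bool:
    """Hypothetical validation rule: ages 18 through 65 are valid."""
    return 18 <= age <= 65

# One representative per equivalence partition, instead of testing
# every possible value in the input domain.
partitions = {
    "below range": (10, False),  # any age < 18 should behave the same
    "in range":    (40, True),   # any age in 18..65 should behave the same
    "above range": (80, False),  # any age > 65 should behave the same
}

for name, (representative, expected) in partitions.items():
    assert accepts_age(representative) is expected
```

Three tests cover the whole domain; boundary value analysis then adds the edge cases that partitioning alone would miss.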
Many critics argue that negative testing is too similar to positive testing to provide any sort of novel insight. However, while similar, they are certainly not identical, and using both forms of testing in tandem allows for the most comprehensive coverage. While positive testing affirms that the given use case works as intended, negative testing helps show that the software handles invalid input gracefully, free of issues that might prevent a customer from using it successfully.