Software testing is a check of the consistency between the actual and the expected behavior of a program, carried out on a finite set of tests selected in a defined way. In a broader sense, testing is one of the quality control techniques, encompassing the activities of Test Management, Test Design, Test Execution, and Test Analysis.
Software Quality is the set of characteristics of software that bear on its ability to satisfy stated and implied needs. [ISO 8402:1994, Quality management and quality assurance]
Verification is the process of evaluating a system or its components to determine whether the results of the current development phase satisfy the conditions imposed at the beginning of that phase [IEEE]. In other words, it checks whether the goals, deadlines, and development tasks defined at the start of the current phase are being met.
Validation is the determination of whether the software being developed conforms to user expectations and needs and to the system requirements [BS 7925-1].
Another interpretation is also common: the process of evaluating a product’s conformance to explicit requirements (specifications) is verification, while evaluating a product’s conformance to user expectations and needs is validation. The following short definitions of these concepts are also often encountered:
Validation – ‘is this the right specification?’.
Verification – ‘is the system correct to specification?’.
Objectives of testing
– Increase the likelihood that the application under test will work correctly under any circumstances.
– Increase the likelihood that the application under test will meet all the described requirements.
– Provide up-to-date information about the current state of the product.
Stages of testing:
- Product Analysis
- Work with requirements
- Development of a testing strategy and planning of quality control procedures
- Creating test documentation
- Testing the prototype
- Basic testing
The Test Plan is a document describing the entire scope of testing work: the object of testing, the strategy, the schedule, the criteria for starting and finishing testing, the equipment and special knowledge required, and a risk assessment with options for mitigating the risks.
It answers the questions:
– What should you test?
– What will you test?
– How will you test?
– When will you test?
– What are the criteria for starting testing?
– What are the criteria for finishing testing?
The main points of the test plan
The IEEE 829 standard lists the items a test plan should (where applicable) include:
a) Test plan identifier;
b) Introduction;
c) Test items;
d) Features to be tested;
e) Features not to be tested;
f) Approach;
g) Item pass/fail criteria;
h) Suspension criteria and resumption requirements;
i) Test deliverables;
j) Testing tasks;
k) Environmental needs;
l) Responsibilities;
m) Staffing and training needs;
n) Schedule;
o) Risks and contingencies;
p) Approvals.
Test design is the stage of the software testing process in which test cases are designed and created in accordance with previously defined quality criteria and testing goals.
Roles responsible for test design:
- Test analyst – defines “WHAT to test?”
- Test designer – defines “HOW to test?”
Test Design Techniques
- Equivalence Partitioning (EP). For example: given a range of valid values from 1 to 10, you choose one valid value inside the interval, say 5, and one invalid value outside the interval, say 0.
- Boundary Value Analysis (BVA). Taking the example above, for positive testing we choose the minimum and maximum boundaries (1 and 10), and for negative testing the values just outside the boundaries (0 and 11). Boundary value analysis can be applied to fields, records, files, or any kind of entity that has constraints.
- Cause / Effect (CE). As a rule, this means entering combinations of conditions (causes) to obtain a response from the system (effect). For example, you check the ability to add a client using a specific screen form. To do this, you enter several fields such as “Name”, “Address”, and “Phone Number”, and then click “Add” – this is the cause. After clicking the “Add” button, the system adds the client to the database and displays the client’s number on the screen – this is the effect.
- Error Guessing (EG). The tester uses their knowledge of the system and their ability to interpret the specification to “guess” under what input conditions the system may produce an error. For example, the specification says: “the user must enter the code.” The tester thinks: “What if I don’t enter a code?”, “What if I enter the wrong code?”, and so on. This is error guessing.
- Exhaustive Testing (ET) is the extreme case. Within this technique you test all possible combinations of input values, which in principle should find all problems. In practice this method is not feasible because of the enormous number of input values.
- Pairwise Testing is a technique for generating sets of test data. Its essence can be formulated as follows: build data sets in which every tested value of every parameter is combined at least once with every tested value of all other parameters.
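The EP and BVA examples above can be written down as a small table of test values. A minimal sketch, assuming a hypothetical `validate` function for the 1–10 range from the text:

```python
# Sketch of Equivalence Partitioning and Boundary Value Analysis for
# the 1..10 range. `validate` is a hypothetical validator, assumed
# here purely for illustration.

def validate(value):
    """Accept integers in the valid range 1..10."""
    return 1 <= value <= 10

# EP: one representative value inside the partition, one outside it.
ep_cases = [(5, True), (0, False)]

# BVA: the boundaries themselves plus the values just outside them.
bva_cases = [(1, True), (10, True), (0, False), (11, False)]

for value, expected in ep_cases + bva_cases:
    assert validate(value) == expected, f"validate({value}) != {expected}"
```

The same value lists translate directly into parameterized tests in any xUnit-style framework.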
Traceability matrix – a two-dimensional table that maps the functional requirements of the product to the prepared test cases. The column headers contain the requirements and the row headers the test scenarios. A mark at an intersection means that the requirement in that column is covered by the test scenario in that row.
The traceability matrix is used by QA engineers to validate the coverage of the product by tests, and it is an integral part of the test plan.
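A traceability matrix can be modeled as a simple mapping from requirements to the test cases that cover them; the coverage check then reduces to finding requirements with no covering tests. All requirement and test IDs below are invented for illustration:

```python
# Toy traceability matrix: requirement IDs -> covering test case IDs.
# All IDs are hypothetical, invented for illustration.
traceability = {
    "REQ-1": ["TC-1", "TC-2"],
    "REQ-2": ["TC-3"],
    "REQ-3": [],           # not covered yet
}

# Coverage check: every requirement must be covered by at least one test.
uncovered = [req for req, tests in traceability.items() if not tests]
print("uncovered requirements:", uncovered)  # -> ['REQ-3']
```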
A Test Case is an artifact that describes the set of steps, specific conditions, and parameters required to verify the implementation of a function or a part of it.
Each test case should have three parts:
- PreConditions – a list of actions that bring the system to a state suitable for the main check, or a list of conditions whose fulfillment indicates that the system is in a state suitable for the main test.
- Test Case Description – a list of actions that move the system from one state to another in order to obtain a result from which it can be concluded that the implementation satisfies the requirements.
- PostConditions – a list of actions that return the system to its initial state (the state before the test).
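The three parts map naturally onto xUnit-style fixtures. A minimal sketch using Python's `unittest`, with an in-memory "database" stub assumed in place of a real system:

```python
import unittest

class AddClientTest(unittest.TestCase):
    """Sketch of a test case with the three parts described above."""

    def setUp(self):
        # PreConditions: bring the system to a state suitable for the
        # main check (here: a fresh in-memory "database" stub).
        self.db = {"clients": []}

    def test_add_client(self):
        # Test Case Description: perform the action and check the result.
        self.db["clients"].append({"name": "Ivan"})
        self.assertEqual(len(self.db["clients"]), 1)

    def tearDown(self):
        # PostConditions: return the system to its initial state.
        self.db["clients"].clear()

if __name__ == "__main__":
    unittest.main()
```

Keeping the three parts separate lets the framework guarantee that PostConditions run even when the main check fails.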
Types of Test Cases
Test cases are divided according to the expected result into positive and negative ones:
- A positive test case uses only valid data and verifies that the application correctly performs the function being called.
- A negative test case operates on both valid and invalid data (at least one invalid parameter) and aims to check exceptional situations (the triggering of validators); it also verifies that the function being called is not executed when a validator fires.
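Positive and negative cases for a hypothetical client validator might look like this; the `add_client` function and its validation rules are assumptions made for illustration:

```python
# Hypothetical system under test: adds a client if the data is valid.
clients = []

def add_client(name, phone):
    """Add a client; raise ValueError if any field fails validation."""
    if not name:
        raise ValueError("name is required")
    if not phone.isdigit():
        raise ValueError("phone must contain digits only")
    clients.append({"name": name, "phone": phone})

# Positive test case: only valid data, the function must succeed.
add_client("Ivan", "5551234")
assert clients[-1]["name"] == "Ivan"

# Negative test case: one invalid parameter, the validator must fire
# and the client must NOT be added.
before = len(clients)
try:
    add_client("Ivan", "555-1234")   # invalid phone
except ValueError:
    pass
assert len(clients) == before        # function not executed past the validator
```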