We continue our discussion of software testing. For an introduction to the topic, read the first part of this article.
Below are the basic concepts used in any kind of testing.
A checklist is a document describing what should be tested. A checklist can have very different levels of detail: how detailed it is depends on the reporting requirements, on how well the staff know the product, and on the complexity of the product.
As a rule, a checklist contains only actions (steps), without expected results. It is less formalized than a test scenario and is appropriate when full test scenarios would be redundant. Checklists are also associated with agile approaches to testing.
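To make the idea concrete, here is a minimal sketch (the feature and its steps are hypothetical): a checklist for a login form kept as plain data, with only actions and no expected results.

```python
# A hypothetical checklist for a login form: only actions (steps),
# no expected results -- that is what distinguishes it from a test scenario.
login_checklist = [
    "Open the login page",
    "Log in with a valid username and password",
    "Log in with an invalid password",
    "Log in with an empty username",
    "Log out and check that the session is closed",
]

def print_checklist(items):
    """Render the checklist with empty checkboxes for a manual testing pass."""
    return "\n".join(f"[ ] {step}" for step in items)

print(print_checklist(login_checklist))
```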
A defect (aka a bug) is a discrepancy between the actual result of program execution and the expected result. Defects are detected during software testing, when the tester compares the results produced by the program (a component or a design) with the expected result described in the requirements specification.
An error is a user error, that is, an attempt to use the program in an unintended way.
For example, the user enters letters in fields that expect numbers (age, quantity of goods, etc.).
A well-designed program anticipates such situations and displays an error message.
A bug (defect) is an error made by a programmer (or a designer, or anyone else involved in development): something in the program does not go as planned, and the program goes out of control. For example, when user input is not validated in any way, incorrect data causes crashes or other “joys” in the program’s operation. Or the program is internally built in a way that simply does not match what is expected of it.
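The difference between a user error and a programmer’s bug can be sketched in a few lines (the function names and the age field are illustrative, not from any real system). The first version validates input and reports the user error; the second does not, so bad input crashes it, which is a defect:

```python
def read_age_validated(raw: str):
    """Handles the user error: invalid input yields a message, not a crash."""
    if not raw.isdigit():
        return None, "Error: age must be a number"
    return int(raw), None

def read_age_buggy(raw: str):
    """A defect: user input is not checked, so letters raise a ValueError."""
    return int(raw)  # uncontrolled input -- the program goes out of control

age, message = read_age_validated("abc")
print(message)  # the user error is caught and reported

try:
    read_age_buggy("abc")
except ValueError:
    print("unvalidated input caused a crash: this is a bug")
```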
A failure is a breakdown (not necessarily a hardware one) in the operation of a component, the whole program, or the system. Some defects lead to failures (a defect causes a failure) and some do not; UI defects, for example. A hardware failure not associated with the software is also a failure.
A bug report is a document describing a situation or sequence of actions that led to incorrect operation of the test object, indicating the causes and the expected result.
A typical bug report contains the following fields:
– Summary – A short description of the problem, clearly indicating the reason and type of the error situation.
– Project – Name of the project under test
– Component – Name of part or function of the tested product
– Version – The version on which the error was found
– Severity – The most common five-level grading system is the severity of the defect:
- S1 Blocker
- S2 Critical
- S3 Major
- S4 Minor
- S5 Trivial
– Priorities – Defect priority:
- P1 High
- P2 Medium
- P3 Low
– Status – Bug status. Depends on the procedure used and the life cycle of the bug (bug workflow and lifecycle)
– Author – Creator of the bug report
– Assigned To – Name of the employee assigned to solve the problem
– Environment – OS / service pack / browser + version / … Information about the environment in which the bug was found: operating system, service pack; for web testing, the name and version of the browser, etc.
– Steps to Reproduce – Steps by which the situation that led to the error can easily be reproduced
– Actual Result – The result obtained after following the steps to reproduce
– Expected Result – The expected correct result
– Attachment – A file with logs, a screenshot, or any other document that can help clarify the cause of the error or suggest a way to solve the problem
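The fields above can be modeled as a small data structure. Here is a minimal sketch (the field names follow the template above; the project, component, and sample values are made up):

```python
from dataclasses import dataclass, field
from enum import Enum

class Severity(Enum):
    S1_BLOCKER = 1
    S2_CRITICAL = 2
    S3_MAJOR = 3
    S4_MINOR = 4
    S5_TRIVIAL = 5

class Priority(Enum):
    P1_HIGH = 1
    P2_MEDIUM = 2
    P3_LOW = 3

@dataclass
class BugReport:
    """One record mirroring the bug-report fields described above."""
    summary: str
    project: str
    component: str
    version: str
    severity: Severity
    priority: Priority
    status: str = "New"
    author: str = ""
    assigned_to: str = ""
    environment: str = ""
    steps_to_reproduce: list = field(default_factory=list)
    actual_result: str = ""
    expected_result: str = ""
    attachments: list = field(default_factory=list)

report = BugReport(
    summary="Registration form crashes on non-numeric age",
    project="DemoShop",            # hypothetical project
    component="Registration",
    version="1.4.2",
    severity=Severity.S2_CRITICAL,
    priority=Priority.P1_HIGH,
    steps_to_reproduce=["Open the registration form",
                        "Type 'abc' into the Age field",
                        "Press Submit"],
    actual_result="Unhandled exception, the page crashes",
    expected_result="Validation message: age must be a number",
)
```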
Severity vs Priority:
Severity is an attribute that characterizes the impact of a defect on the application’s operability.
Priority is an attribute that indicates the order in which tasks should be executed or defects eliminated. You could say it is a planning tool for the manager: the higher the priority, the sooner the defect needs to be fixed.
Severity is set by the tester.
Priority is set by the manager, team lead, or customer.
– S1 Blocker – A blocking error that renders the application inoperable, so that further work with the system under test or its key functions becomes impossible. Fixing it is necessary for the system to function at all.
– S2 Critical – A critical error: incorrectly working key business logic, a security hole, or a problem that caused a temporary server crash or made part of the system fail, with no way to work around it through other entry points. Fixing it is necessary for further work with the key functions of the system under test.
– S3 Major – A significant error: part of the main business logic works incorrectly. The error is not critical, or it is possible to work with the function under test through other entry points.
– S4 Minor – A minor error that does not violate the business logic of the tested part of the application; an obvious user-interface problem.
– S5 Trivial – A trivial error that does not affect the business logic of the application: a poorly reproducible problem barely visible through the user interface, a problem in third-party libraries or services, or a problem that has no impact on the overall quality of the product.
– P1 High – The error should be fixed as soon as possible, because its presence is critical for the project.
– P2 Medium – The error must be fixed; its presence is not critical, but it still requires a firm commitment to resolve it.
– P3 Low – The error must be fixed; its presence is not critical and does not require an urgent solution.
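Since priority drives the order of fixing, a tracker typically sorts the backlog by priority first and uses severity as a tie-breaker. A minimal sketch (the backlog entries are made up):

```python
# Hypothetical backlog: (summary, priority 1-3, severity 1-5); lower = more urgent.
backlog = [
    ("Typo in footer", 3, 5),
    ("Server crash on checkout", 1, 2),
    ("Misaligned icon", 3, 4),
    ("Payment rejected for valid cards", 1, 1),
    ("Slow search on large catalogs", 2, 3),
]

# Fix order: priority first (the manager's planning tool),
# severity second (the tester's assessment of impact).
fix_order = sorted(backlog, key=lambda bug: (bug[1], bug[2]))
for summary, priority, severity in fix_order:
    print(f"P{priority}/S{severity}: {summary}")
```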
- Unit Testing
Unit (component) testing checks functionality and looks for defects in parts of the application that are accessible and can be tested in isolation (program modules, objects, classes, functions, etc.).
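A minimal sketch of a unit test (the function under test is hypothetical; plain assertions are used instead of a specific framework):

```python
def apply_discount(price: float, percent: float) -> float:
    """Hypothetical unit under test: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Unit test: the function is exercised in isolation from the rest of the system.
def test_apply_discount():
    assert apply_discount(100.0, 25) == 75.0
    assert apply_discount(100.0, 0) == 100.0
    try:
        apply_discount(100.0, 150)
    except ValueError:
        pass  # invalid input is rejected, as expected
    else:
        raise AssertionError("expected ValueError for an invalid percent")

test_apply_discount()
```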
- Integration Testing
Verifies the interaction between the components of the system after component testing has been completed.
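Once the units pass in isolation, an integration test checks how they work together. A sketch with two hypothetical components, a storage component and a service that depends on it:

```python
class InMemoryUserRepo:
    """Component 1: a trivial storage component (hypothetical)."""
    def __init__(self):
        self._users = {}

    def save(self, name: str) -> int:
        user_id = len(self._users) + 1
        self._users[user_id] = name
        return user_id

    def get(self, user_id: int) -> str:
        return self._users[user_id]

class GreetingService:
    """Component 2: depends on the repository component."""
    def __init__(self, repo: InMemoryUserRepo):
        self.repo = repo

    def greet(self, user_id: int) -> str:
        return f"Hello, {self.repo.get(user_id)}!"

# Integration test: both components are wired together and tested as a pair,
# checking the interaction rather than each unit on its own.
repo = InMemoryUserRepo()
service = GreetingService(repo)
user_id = repo.save("Alice")
assert service.greet(user_id) == "Hello, Alice!"
```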
- System Testing
The main task of system testing is to check both functional and non-functional requirements of the system as a whole. It detects defects such as incorrect use of system resources, unintended combinations of user-level data, incompatibility with the environment, unanticipated usage scenarios, missing or incorrect functionality, usability problems, etc.
- Operational Testing (Release Testing)
Even if the system meets all the requirements, it is important to make sure that it meets the user’s needs and performs its role in its operating environment, as defined in the business model of the system. It should be noted that the business model may itself contain errors, which is why operational testing matters as the final step of validation. In addition, testing in the operating environment reveals non-functional problems, such as conflicts with other systems adjacent to it in the business domain or in the software and electronic environment, or insufficient system performance in the operating environment. Obviously, finding such problems at the deployment stage is critical and costly, so it is important to carry out not only verification but also validation from the earliest stages of software development.
- Acceptance Testing
A formal testing process that verifies the system’s compliance with the requirements, conducted in order to:
- determine whether the system satisfies the acceptance criteria;
- let the customer or another authorized person decide whether the application is accepted.
In the next part, we will talk about the kinds and types of testing and discuss them in detail.