Non-Functional Testing
This time let’s take a closer look at the non-functional types of testing.
Non-functional testing covers the tests needed to evaluate characteristics of software that can be measured on a quantitative scale. In general, this is testing how the system works rather than what it does. The main types of non-functional tests are:
- All types of performance testing: Performance and Load Testing; Stress Testing; Stability / Reliability Testing; Volume Testing
- Installation testing
- Usability Testing
- Failover and Recovery Testing
- Configuration Testing
Load testing or performance testing
Load testing, or performance testing, is automated testing that simulates the activity of a certain number of business users working on a shared resource.
Basic types of performance testing
Let’s consider the main types of load testing and the tasks each of them addresses.
Performance testing
The task of performance testing is to determine the scalability of the application under load, which involves:
- measuring the execution time of selected operations at given intensities of those operations
- determining the number of users who can work with the application simultaneously
- determining the boundaries of acceptable performance as the load (the intensity of those operations) increases
- investigating performance at high, marginal, and stress loads
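As an illustration, the first two tasks can be sketched in a few lines of Python: run an operation from a growing number of simulated users and record how its execution time changes. The `business_operation` function is a placeholder assumption; in a real test it would call the system under test (for example, issue an HTTP request).

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def business_operation():
    """Stand-in for a real user operation (e.g. an HTTP request)."""
    time.sleep(0.01)  # simulated server-side work

def measure(concurrency, requests_per_user=5):
    """Run the operation from `concurrency` simulated users and
    return the median execution time in seconds."""
    timings = []

    def worker():
        for _ in range(requests_per_user):
            start = time.perf_counter()
            business_operation()
            timings.append(time.perf_counter() - start)

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        for _ in range(concurrency):
            pool.submit(worker)
    return statistics.median(timings)

# Measure how response time changes as the number of users grows
for users in (1, 5, 10):
    print(f"{users:>3} users -> median {measure(users) * 1000:.1f} ms")
```

In a real load test, tools such as JMeter or Gatling take this role, but the principle is the same: fixed operation, varying intensity, measured response time.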
Stress Testing
Stress testing checks whether the application, and the system as a whole, stay operational under stress, and also assesses the system’s ability to regenerate, i.e. to return to normal once the stress stops. Stress in this context can be raising the intensity of operations to very high values, or an emergency change in the server configuration. One of the tasks of stress testing can also be to evaluate performance degradation, so its goals may overlap with those of performance testing.
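The degradation-and-regeneration idea can be sketched as follows. `ToyService` is an invented stand-in whose latency degrades while “stress” is applied and returns to normal afterwards; a real stress test would apply and remove real load instead of flipping a flag.

```python
import time

class ToyService:
    """Toy system that slows down under stress and recovers afterwards."""
    def __init__(self):
        self.overloaded = False

    def request(self):
        start = time.perf_counter()
        time.sleep(0.05 if self.overloaded else 0.002)
        return time.perf_counter() - start

    def apply_stress(self, on):
        # Stand-in for pushing the intensity of operations to extreme values
        self.overloaded = on

svc = ToyService()
baseline = svc.request()

svc.apply_stress(True)       # stress phase: latency degrades
stressed = svc.request()

svc.apply_stress(False)      # stress removed: check regeneration
recovered = svc.request()

print(f"baseline {baseline * 1000:.1f} ms, under stress {stressed * 1000:.1f} ms, "
      f"after stress {recovered * 1000:.1f} ms")
assert stressed > baseline        # measurable degradation under stress
assert recovered < stressed / 2   # the system returned to normal
```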
Volume Testing
The task of volume testing is to estimate performance as the amount of data in the application’s database grows, which involves:
- measuring the execution time of selected operations at given intensities of those operations
- determining the number of users who can work with the application simultaneously
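A minimal sketch of this idea using Python’s built-in `sqlite3`: the same query is re-timed as the table grows by an order of magnitude at each step. The schema and data are invented for illustration.

```python
import sqlite3
import time

def query_time(conn):
    """Time one execution of a representative query, in seconds."""
    start = time.perf_counter()
    conn.execute("SELECT COUNT(*) FROM orders WHERE amount > 500").fetchone()
    return time.perf_counter() - start

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")

# Grow the data volume and re-measure the same operation at each step
for volume in (1_000, 10_000, 100_000):
    conn.executemany(
        "INSERT INTO orders (amount) VALUES (?)",
        ((i % 1000,) for i in range(volume)),
    )
    total = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
    print(f"{total:>7} rows -> query took {query_time(conn) * 1000:.2f} ms")
```

If the measured time grows faster than the data volume justifies (for example, because an index is missing), volume testing has done its job.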
Stability / Reliability Testing
The task of stability (reliability) testing is to verify that the application remains operational during long (many-hour) test runs at an average load level. Operation execution times may play a secondary role in this type of testing; what comes first is the absence of memory leaks, server restarts under load, and other factors that affect stability.
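A leak of this kind can be demonstrated even in a small sketch. The Python example below uses the standard `tracemalloc` module to compare net memory growth over many repeated operations with and without a deliberately planted leak; `handle_request` and the leak itself are invented for illustration.

```python
import tracemalloc

leak = []  # simulated defect: a reference is kept on every operation

def handle_request(leaky=False):
    data = list(range(1000))   # normal per-request allocation
    if leaky:
        leak.append(data)      # bug: the data is never released

def memory_growth(iterations, leaky):
    """Run many operations and return the net memory growth in bytes."""
    tracemalloc.start()
    before, _ = tracemalloc.get_traced_memory()
    for _ in range(iterations):
        handle_request(leaky)
    after, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return after - before

healthy = memory_growth(500, leaky=False)
leaking = memory_growth(500, leaky=True)
print(f"healthy run grew by {healthy} bytes, leaky run by {leaking} bytes")
```

In a real stability test the same comparison is made over hours of load, usually by watching process memory from outside rather than instrumenting the code.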
Load vs Performance Testing
In English-language terminology you can also find another type of testing – Load Testing – testing the system’s response to changes in load (within allowable limits). In our view, load and performance testing pursue the same goal: checking performance (response time) under different loads, which is why we did not separate them. Others may do the opposite. The main thing is to understand the goals of each type of testing and try to reach them.
Installation Testing
Installation testing is aimed at checking the successful installation and configuration, as well as updating or uninstalling the software.
Today, software is most often distributed with installers: special programs that themselves also require proper testing.
In real-world conditions there may be no installer. In that case you have to install the software yourself, using documentation – instructions or readme files that describe, step by step, all the necessary actions and checks.
In distributed systems, where an application is deployed onto an already running environment, a simple set of instructions may not be enough. For such cases a Deployment Plan is often written, which includes not only the steps to install the application but also steps to roll back to the previous version in case of failure. The deployment plan itself must also go through a testing procedure to avoid problems during a real release. This is especially true when installing on systems where every minute of downtime means reputational damage and a large loss of funds, for example banks, financial companies, or even banner networks. That is why installation testing can be called one of the most important tasks in ensuring software quality.
Such an integrated approach – writing plans, testing the installation step by step, and testing the rollback of the installation – can rightfully be called installation testing.
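To illustrate the idea, here is a minimal Python sketch of such a plan: an ordered list of install steps, each paired with a rollback step, executed so that a failure triggers the rollback of everything already done, in reverse order. The steps and the simulated failure are hypothetical.

```python
class DeploymentError(Exception):
    pass

def deploy(plan, fail_at=None):
    """Execute install steps in order; on failure, run the rollback
    steps for everything already done, in reverse order."""
    log, done = [], []
    for name, install, rollback in plan:
        try:
            if name == fail_at:          # simulated failure, for testing the plan
                raise DeploymentError(name)
            install()
            log.append(f"installed: {name}")
            done.append((name, rollback))
        except DeploymentError:
            log.append(f"FAILED: {name} -- rolling back")
            for prev_name, prev_rollback in reversed(done):
                prev_rollback()
                log.append(f"rolled back: {prev_name}")
            break
    return log

# Hypothetical plan for a toy application; `state` stands in for the environment
state = {}
plan = [
    ("stop old version",  lambda: state.update(old="stopped"),
                          lambda: state.update(old="running")),
    ("copy new binaries", lambda: state.update(bin="v2"),
                          lambda: state.update(bin="v1")),
    ("migrate database",  lambda: state.update(db="v2"),
                          lambda: state.update(db="v1")),
]

print(deploy(plan, fail_at="migrate database"))
```

Testing the plan means exercising both paths: a clean run through all the steps, and an interrupted run that must leave the environment in its previous state.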
Usability Testing
Sometimes we come across confusing, illogical applications whose many functions and ways of using them are far from obvious. After working with such an application we rarely want to use it again, and we go looking for more convenient alternatives. For an application to be popular, being functional is not enough: it must also be convenient. Intuitive applications save users’ nerves and save the employer the cost of training, which makes them more competitive. That is why usability testing, discussed below, is an integral part of testing any mass-market product.
Usability testing is a testing method aimed at establishing the degree of ease of use, learning, understanding and attractiveness for users of the developed product in the context of given conditions. [ISO 9126]
Usability testing provides an assessment of the ease of use of the application for the following items:
- productivity, efficiency – how much time and how many steps does the user need to complete the main tasks of the application, such as posting news, registering, or buying? (less is better)
- accuracy – how many mistakes does the user make while working with the application? (fewer is better)
- memorability – how much does the user remember about the application after not using it for a long period? (repeating operations after a break should go faster than for a new user)
- emotional response – how does the user feel after completing the task: confused, stressed? Would the user recommend the system to friends? (a positive reaction is better)
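The first two of these metrics can be made concrete. Below is a small Python sketch that computes efficiency and accuracy figures from session logs; the task, field names, and numbers are invented for illustration.

```python
from statistics import mean

# Hypothetical session logs: time, steps, and errors per user while
# completing the same task (e.g. registering an account)
sessions = [
    {"user": "A", "steps": 6, "errors": 0, "seconds": 48},
    {"user": "B", "steps": 9, "errors": 2, "seconds": 95},
    {"user": "C", "steps": 7, "errors": 1, "seconds": 60},
]

efficiency = mean(s["seconds"] for s in sessions)  # less is better
accuracy = mean(s["errors"] for s in sessions)     # fewer is better
print(f"avg time to complete: {efficiency:.0f} s, avg errors: {accuracy:.1f}")
```

Tracking these numbers across releases turns a subjective impression of “convenience” into something that can be compared and improved.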
Levels of usability testing
Usability testing can be carried out both on the finished product, as black-box testing, and on the application’s programming interfaces (API). In the latter case, the convenience of using objects, classes, methods, and variables is checked, along with how easy it is to change and extend the system and integrate it with other modules or systems. Convenient interfaces (APIs) can improve the quality and speed of writing and maintaining code and, as a result, the quality of the product itself.
Hence it becomes obvious that usability testing can be carried out at different levels of software development: unit, integration, system, and acceptance. Its focus depends entirely on who uses the application at the given level: a developer, a business user of the system, and so on.
Tips for improving ease of use
When designing user-friendly applications, it is useful to follow fail-safe principles, better known as “foolproofing”. A simple example: if a field requires a numeric value, it is logical to restrict input to digits only – there will be fewer accidental errors.
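A sketch of that example in Python, assuming a hypothetical quantity field with a valid range of 1–999:

```python
def read_quantity(raw: str) -> int:
    """Fail-safe numeric field: accept only digits in a sensible range
    instead of letting bad input propagate into the system."""
    if not raw.isdigit():
        raise ValueError("digits only")
    value = int(raw)
    if not 1 <= value <= 999:
        raise ValueError("quantity must be between 1 and 999")
    return value

print(read_quantity("42"))          # valid input passes through
for bad in ("abc", "-5", "0", "12.5"):
    try:
        read_quantity(bad)
    except ValueError as err:
        print(f"rejected {bad!r}: {err}")
```

In a GUI the same principle appears as input masks, spinners, and disabled buttons: the interface simply does not let the user make the random error.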
To improve the usability of existing applications, you can use the Plan-Do-Check-Act (Deming) cycle: collect feedback on the application’s behavior and design from existing users and, in accordance with their comments, plan and implement improvements.
Misconceptions about testing ease of use
- User interface testing = usability testing
Usability testing has nothing to do with testing the functionality of the user interface; it is simply carried out on the user interface, as well as on many other components of the product. The type of testing and the test cases will be completely different, since it may concern the convenience of using non-visual components (if any) or, for example, the process of administering a distributed client-server product.
- Usability testing can be done without the participation of an expert
A person not versed in the subject domain cannot always conduct it independently. Imagine that a tester needs to evaluate the usability of a strategic bomber. He would have to check the main functions: the convenience of combat, navigation, piloting, maintenance, ground transportation, and so on. Without an expert’s involvement this would clearly be very problematic, and one could even say impossible.
Failover and Recovery Testing
Failover and Recovery Testing checks the product’s ability to withstand, and successfully recover from, possible failures caused by software errors, hardware failures, or communication problems (for example, a network failure). The purpose of this type of testing is to verify the recovery systems (or systems that duplicate the main functionality), which, in the event of a failure, must ensure the safety and integrity of the tested product’s data.
Failover and recovery testing is very important for systems that operate 24×7. If you are building a product that will run, for example, on the Internet, you simply cannot do without this type of testing, because every minute of downtime or any data loss during an equipment failure can cost you money, customers, and market reputation.
The method of such testing is to simulate various failure conditions and then study and evaluate how the protective systems respond. Such checks reveal whether the required degree of system recovery was achieved after the failure occurred.
For clarity, let us consider some variants of this testing and general methods of conducting it. In most cases, the objects of testing are quite likely operational problems, such as:
- Power failure on the server computer
- Power failure on the client computer
- Incomplete data-processing cycles (interruption of data filters, interruption of synchronization)
- Introduction of impossible or erroneous elements into data sets
- Failure of data carriers.
These situations can be reproduced as soon as development reaches the point where all the recovery or duplication systems are ready to perform their functions. Technically, the tests can be performed in the following ways:
- Simulate a sudden power failure on the computer (de-energize the computer)
- Simulate a loss of network connectivity (unplug the network cable, disable the network device)
- Simulate a media failure (de-energize an external storage device)
- Simulate the presence of incorrect data in the system (a special test set or database)
Once the relevant failure conditions have been reproduced and the recovery systems have responded, the product can be evaluated from the standpoint of failover testing. In all the cases above, upon completion of the recovery procedures, a certain required state of the product data must be achieved:
- Loss or corruption of data within acceptable limits.
- A report or a reporting system indicating processes or transactions that were not completed as a result of a failure.
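As a small illustration of the recovery idea, here is a Python sketch of one classic protective technique, the atomic write: even if a save is interrupted midway (simulated below), the last complete version of the data survives. The scenario is an invented toy, not a prescription for any particular product.

```python
import os
import tempfile

def safe_write(path, data):
    """Crash-safe save: write a temporary file first, then atomically
    replace the target, so an interrupted save never corrupts it."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        f.write(data)
    os.replace(tmp, path)  # atomic rename on POSIX and Windows

path = os.path.join(tempfile.gettempdir(), "failover_demo.txt")
safe_write(path, "version 1")

# Simulate a crash in the middle of the next save: the temporary file
# is written, but the atomic replace never happens (e.g. power loss)
fd, tmp = tempfile.mkstemp(dir=tempfile.gettempdir())
with os.fdopen(fd, "w") as f:
    f.write("version 2 (interrupted)")
os.remove(tmp)  # recovery step: discard the orphaned temporary file

# Recovery check: the last complete version of the data is intact
with open(path) as f:
    print(f.read())
```

A failover test for such a mechanism would deliberately interrupt the save at every possible point and then verify that the data matches one complete version or the other, never a mixture.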
It is worth noting that failover and recovery testing is highly product-specific. Test scenarios should be developed with all the features of the system under test in mind. Given the rather harsh methods of influence involved, it is also worth evaluating whether this type of testing is justified for a particular software product.
Configuration testing
Configuration Testing is a special type of testing aimed at testing the operation of the software under various system configurations (declared platforms, supported drivers, various computer configurations, etc.)
Depending on the type of project, configuration testing can have different purposes:
- System Profiling Project
Test purpose: to determine the optimal hardware configuration that provides the required performance and response-time characteristics of the system under test.
- Project on migration of the system from one platform to another
Test purpose: to check the test object for compatibility with the hardware, operating systems, and third-party software declared in the specification.
Levels of testing
For client-server applications, configuration testing can be conditionally divided into two levels (for some types of applications, only one can be relevant):
- Server
- Client
At the first (server) level, the interaction of the released software with the environment in which it is installed is tested:
- Hardware (type and number of processors, memory capacity, network / network adapter characteristics, etc.)
- Software (OS, drivers and libraries, third-party software affecting the application’s operation, etc.)
The main focus here is on testing in order to determine the optimal configuration of equipment that meets the required quality characteristics (efficiency, portability, ease of maintenance, reliability).
At the next (client) level, the software is tested from the position of its end user and the configuration of the user’s workstation. At this stage, characteristics such as usability and functionality are checked, which requires running a series of tests with different workstation configurations:
- The type, version and bit depth of the operating system (a similar type of testing is called cross-platform testing)
- Type and version of the Web browser, if a Web application is being tested (a similar kind of testing is called cross-browser testing)
- The type and model of the video adapter (when testing games it is very important)
- Application work at different screen resolutions
- Versions of drivers, libraries, etc. (for Java applications the version of the Java virtual machine is very important; the same can be said about the framework version for .NET applications)
Testing procedure
Before starting the configuration testing, it is recommended:
- Create a coverage matrix (a table listing all possible configurations)
- Prioritize the configurations (in practice it is unlikely that all desired configurations can be verified)
- Check each configuration step by step, in accordance with the priorities set
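The first two steps can be sketched in Python with the standard `itertools.product`; the configuration dimensions and the market-share weights below are assumptions for illustration.

```python
from itertools import product

# Hypothetical configuration dimensions for a web application
oses = ["Windows 11", "Ubuntu 22.04", "macOS 14"]
browsers = ["Chrome", "Firefox", "Safari"]
resolutions = ["1920x1080", "1366x768"]

# Coverage matrix: every possible configuration
matrix = [
    {"os": o, "browser": b, "resolution": r}
    for o, b, r in product(oses, browsers, resolutions)
]
print(f"{len(matrix)} configurations in the full matrix")

# Prioritize: not everything will be checked in practice, so rank the
# configurations by an assumed market share and test the top slice first
share = {"Chrome": 3, "Safari": 2, "Firefox": 1}
prioritized = sorted(matrix, key=lambda c: share[c["browser"]], reverse=True)
top = prioritized[:6]
print("highest-priority configuration:", top[0])
```

Even three small dimensions already produce 18 combinations, which shows why the matrix has to be prioritized rather than tested exhaustively.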
Even at the initial stage it becomes obvious that the more workstation configurations the application must support, the more tests we will need to run. We therefore recommend automating this process where possible, since configuration testing is exactly where automation really helps save time and resources. Automated testing is not a panacea, of course, but here it proves to be a very effective assistant.
So, in general we have:
- configuration testing checks the compatibility of the product (software) with various hardware and software environments
- its main objectives are to determine the optimal configuration and to check the application’s compatibility with the required environment (hardware, OS, etc.)
- automation of configuration testing helps to avoid unnecessary costs
Stay tuned and keep learning useful details about software testing with us.