First of all, load testing is still testing.
It is important to start not by choosing a tool, but by analyzing the features of the system under test and the scenarios in which it is used.
Most general testing approaches and practices apply here as well, and we will start with one of them.
The cost of a mistake often grows over time. The later an error is detected, the more functionality has been built on top of the problematic code. That dependent functionality may also need to be corrected, which often introduces new errors. Old code is also harder to analyze when looking for the cause of a problem. As a result, the amount of work keeps growing.

In general, the more often a product is tested, the faster errors are detected and the cheaper they are to fix. Regular background testing is a good choice. Ideally, complement it with regression load testing and monitor the dynamics of the indicators as often as possible.
Any application can be used in various ways, and user preferences and behavior change over time. Still, the most common scenarios and usage patterns are usually known.
Some examples of what may not be taken into account in your scenario:
Using only constant parameters for requests can hide problems in the service under test. At the very least, less unique data will be stored in the backend. With identical data, caching and processing algorithms on the server may also behave differently. As a result, an unrealistic scenario will be tested.
A similar problem can occur with a small set of test data for parameterization. Sometimes the data must be guaranteed to be unique within the iterations of a single virtual user, or even across the entire test.
Often the service returns data (tokens, identifiers) that must be passed in subsequent requests for correct operation. Extracting such data from responses and reusing it is called correlation. Missing or incorrect correlation leads to functional errors during the test, and its results may be completely invalid. Use sufficiently large test data sets or data generators.
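The two ideas above, unique parameterization and correlation, can be sketched in a few lines of Python. This is a minimal illustration, not any specific tool's API; the field names ("order_id", "session_token") are assumptions for the example.

```python
import itertools
import uuid


def unique_payloads():
    """Generate request payloads guaranteed unique across the whole test.

    A uuid4 per iteration avoids collisions even when many virtual
    users draw from the same generator.
    """
    for iteration in itertools.count(1):
        yield {"order_id": str(uuid.uuid4()), "iteration": iteration}


def correlate(login_response: dict, next_request: dict) -> dict:
    """Copy a server-issued token from one response into the next request.

    The "session_token" field is illustrative; real services return such
    values in headers, cookies, or response bodies.
    """
    token = login_response["session_token"]
    next_request["headers"] = {"Authorization": f"Bearer {token}"}
    return next_request
```

A scenario would pull a fresh payload from `unique_payloads()` on every iteration and pass each response through `correlate` before issuing the next request.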
A load testing tool is itself an application. It, too, can be overloaded when generating a very heavy load. In that case, even if the testing process goes smoothly, the results may be completely invalid.
To simulate a high load correctly, agents (load generators) are usually used, and it is important to use enough of them for the target traffic. As a rule, up to a thousand virtual users can be simulated on one agent, but this figure strongly depends on the scenario.

Some vendors of load testing tools are disingenuous, promising up to hundreds of thousands of virtual users per agent in their advertising materials. Yes, this is possible, but only with a single connection (thread) per user and long pauses between requests.
If you run agents on your own hardware, make sure that all applications not participating in the test are stopped. Antivirus software also consumes resources and sometimes has a significant impact.
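One cheap sanity check during a test is to watch whether the agent itself is saturated. A minimal sketch, assuming a Unix-like agent host; the 0.8 threshold is an illustrative rule of thumb, not a standard:

```python
import os


def generator_overloaded(threshold: float = 0.8) -> bool:
    """Rough check that the load generator is not the bottleneck.

    Compares the 1-minute load average against the number of CPU cores.
    os.getloadavg() is available on Unix-like systems only, and the
    threshold here is an assumption chosen for illustration.
    """
    one_min_load, _, _ = os.getloadavg()
    cores = os.cpu_count() or 1
    return (one_min_load / cores) > threshold
```

If this ever returns True mid-test, the measured response times reflect the agent's own queuing, not the system under test, and the run should be repeated with more agents.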
In load testing there is a common approach where functional errors are ignored entirely, under the slogan "We are testing performance, not functionality".
But is a system that produces many functional errors under load what the user expects? Do we need a service that, at crucial moments, performs only one function: generating errors?
Another practice that is best avoided is tracking the total number of errors across all requests in the scenario. In a scenario of five requests, we then cannot distinguish between the case where the single most important request always fails and the case where each request fails 20% of the time. The overall error rate is 20% in both cases, but in the first the service never completes its task, while in the second it fails only 1 time out of 5 attempts.

For all key requests, add checks on the correctness of the returned data and track the error rate of each request separately.
If your service's clients are located in different parts of the world, testing from different geographic regions is very important.
Factors that come into play in this case:
A high load can take the form of many virtual users, many connections, or a high number of requests.
Although in some cases it does not matter, the number of active parallel users contributes to the load regardless of the total number of requests. This shows up as additional memory used for caching the data of different users, database locks, and so on. Likewise, using more connections to deliver the same number of requests creates additional load, which in some cases can be decisive, including a multi-fold reduction in the maximum supported number of users.

The ideal test scenario is one in which all three components of the load are close to their real-world values.
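The relationship between these components can be estimated with Little's Law: in a closed-loop scenario each virtual user issues one request per (response time + think time), so throughput ≈ users / (response time + think time). A minimal sketch; all input values are illustrative and should be measured on the real system:

```python
import math


def required_users(target_rps: float,
                   response_time_s: float,
                   think_time_s: float) -> int:
    """Estimate virtual users needed for a target request rate.

    Little's Law for a closed loop:
        users = target_rps * (response_time + think_time)
    Rounded up, since users come in whole numbers.
    """
    return math.ceil(target_rps * (response_time_s + think_time_s))
```

For example, 100 requests per second with a 0.2 s response time and 4.8 s of think time requires about 500 virtual users; drop the think time to zero and 20 users suffice, which is exactly why user count, connection count, and request rate must each match reality.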