Software Response Evaluation Methodology

One of the most important characteristics of corporate software is response time (also known as speed). It is also quite difficult to achieve, since corporate software solutions are multi-user and operate on very large data sets. Of course, everyone would like every action to return instant results, but that is impossible. Here is a methodology by which a company can set application response time targets and evaluate the software against them.

Because delays in response at some, if not all, points in an application are inevitable, one should take a realistic stance on them. So, what is the required response time of an application? It is the amount of time that the user CAN wait, not the amount of time the user WANTS to wait!

Here is a methodology to evaluate the acceptability of software response times.
The methodology has 6 distinct phases:


I. Identify which functions will be using this software - For the purpose of this example, assume the following functions use the evaluated software:

  1. Customer Relations Management
  2. Direct Sales
  3. Service Provisioning


II. Identify activities in each function which will be using this software - Enumerate the actions performed by the above functions, and add a row with the following information for each action:
  • Function - The name of the function that owns the action
  • Action - The name of the performed action, as a business activity
  • #Times Per Day - Number of times this action is performed per work day by one employee
  • Avg. Perf. Time - Average duration of each performance of the action (in seconds)
  • Des. Perf. Time - Desired duration of each performance of the action (in seconds)
  • #Users - Number of persons performing this action during their work day
The filled table will show you what activities will happen in the software during a typical work day, and by how many people. This is essential for a realistic evaluation.

The Avg. Perf. Time will give you the maximum expected response time, and the Des. Perf. Time will give you the optimal response time of the software for that action.

NOTE: You may want to reduce the resulting numbers by 25% for the evaluation, since software packages tend to slow down gradually with use, and the reduction gives you breathing room. Whether to apply it is a decision for the entire evaluation team and must be made per project.
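
To make the table concrete, here is a minimal sketch in Python (the ActionProfile class and the example values are hypothetical, for illustration only) showing how one row of the table can be recorded and how the maximum and optimal response targets can be derived from it, including the optional 25% reduction:

    from dataclasses import dataclass

    @dataclass
    class ActionProfile:
        function: str           # owning business function
        action: str             # performed action, as a business activity
        times_per_day: int      # performances per work day by one employee
        avg_perf_time_s: float  # average duration of one performance (seconds)
        des_perf_time_s: float  # desired duration of one performance (seconds)
        users: int              # number of persons performing this action

        def targets(self, reduction: float = 0.25):
            """Return (maximum expected, optimal) response targets in seconds,
            reduced by the agreed margin to leave breathing room."""
            factor = 1.0 - reduction
            return self.avg_perf_time_s * factor, self.des_perf_time_s * factor

    # Hypothetical example row - the values are illustrative only
    create_order = ActionProfile("Direct Sales", "Create customer order",
                                 times_per_day=40, avg_perf_time_s=90,
                                 des_perf_time_s=60, users=25)
    print(create_order.targets())  # (67.5, 45.0) with the 25% reduction applied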


III. Identify the cutoff point for the test activities and the acceptable variation of results - With the table properly filled, you have a set of realistic usage tests for response evaluation. In practice, the properly filled table will contain a huge number of activities, so you need to set a cutoff point: a set of activities that simulates a real situation without going overboard.

Usually, you should discard actions that are infrequent (1-2 times per day) and have an Avg. Perf. Time of no more than 5 minutes, as well as actions that are deemed less important. This is the cruel part, and it is best done together with the department managers.

Also, you should define what an acceptable result is. It is unrealistic to expect the results to be a 100% match to your targets. An example of acceptable variation (checked in the sketch after this list) would be:
  • at most 20% of the performed actions during the evaluation are above the desired response time but below or equal to the average response time.
  • at most 5% of the performed actions during the evaluation are above the average response time
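
Here is a minimal sketch of how that acceptable-variation check could be expressed, assuming you have a list of measured response times for one action and its desired and average times as targets (the function and parameter names are hypothetical):

    def within_acceptable_variation(measured, desired, average,
                                    max_over_desired=0.20, max_over_average=0.05):
        """measured: observed response times (seconds) for one action.
        Returns True when at most 20% of the measurements exceed the desired
        time (while staying at or below the average time) and at most 5%
        exceed the average time."""
        n = len(measured)
        over_desired = sum(1 for t in measured if desired < t <= average)
        over_average = sum(1 for t in measured if t > average)
        return (over_desired / n <= max_over_desired
                and over_average / n <= max_over_average)

    # Hypothetical usage: samples measured against a 45 s desired / 67.5 s average target
    # ok = within_acceptable_variation(samples, desired=45.0, average=67.5)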


IV. Prepare real amounts of data - A common mistake of software developers is that they test their systems on a laboratory data set, which is usually far from the real situation in both volume and quality. The evaluation should be performed on a copy of production data, possibly anonymized for security reasons.
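
As one possible way to prepare such a copy (a sketch only, assuming the data is exported to CSV and that a column named customer_name is the sensitive field), personally identifiable values can be replaced with salted hashes so the test data keeps its real volume and distribution but hides identities:

    import csv
    import hashlib

    def pseudonymize_column(src_path, dst_path, column="customer_name"):
        """Copy a CSV export, replacing one sensitive column with a salted hash,
        so the data keeps its volume and distribution but hides identities."""
        salt = "evaluation-salt"  # hypothetical fixed salt for this test copy
        with open(src_path, newline="") as src, open(dst_path, "w", newline="") as dst:
            reader = csv.DictReader(src)
            writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
            writer.writeheader()
            for row in reader:
                row[column] = hashlib.sha256((salt + row[column]).encode()).hexdigest()[:16]
                writer.writerow(row)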


V. Call in the testers and run the test - With the evaluation activity set and the data set to evaluate on, hire a testing team to run the test. The best evaluation is the following:
  • First, run an automated test driven by programs simulating users, since they measure the actual time of EVERY action down to the millisecond and make the results easy to analyze. To avoid measurement errors, run the test at least 5 times, discard the best and worst results, and average the remaining runs.
  • After this, run a test with real human users, and task them with timing and recording each of their actions. Then average the result of this test with the statistical result of the automated test (a sketch of this aggregation follows the list).
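
Here is a minimal sketch of that aggregation, assuming each automated run yields one measured time per action (all names and numbers are illustrative):

    def aggregate_runs(run_times):
        """run_times: measured times (seconds) for one action, one per automated run.
        Requires at least 5 runs; drops the best and worst, then averages the rest."""
        if len(run_times) < 5:
            raise ValueError("run the automated test at least 5 times")
        trimmed = sorted(run_times)[1:-1]  # discard the best and the worst run
        return sum(trimmed) / len(trimmed)

    def combine_with_human(automated_avg, human_avg):
        """Average the automated result with the human testers' recorded times."""
        return (automated_avg + human_avg) / 2.0

    # Hypothetical example: five automated runs, human-timed average of 70 seconds
    # auto = aggregate_runs([62.0, 65.5, 66.0, 68.2, 80.1])  # about 66.6
    # print(combine_with_human(auto, 70.0))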


VI. Analyze results and make a decision - In a perfect world, the result would be a simple pass or fail, and you would buy the software or move on. In reality, you will get great response times on some actions and horrible ones on others. And naturally, office politics and strategic interests will interfere with a cold decision. So here is a rule-of-thumb approach (summarized in the sketch after this list):
  • If more than 25% of the actions performed during the test are above their respective average response time, return the software for complete reworking before re-evaluation
  • If less than 25% but more than 10% of the actions performed during the test are above their respective average response time, continue with further evaluation or preparation for implementation, but insist on a re-test before final purchase to confirm that the expected acceptable variation is reached
  • If less than 10%, but more than the acceptable variation, of the actions performed during the test are above their target response time, continue the implementation, but insist on a re-test before go-live to confirm that the target variation is reached
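
The rule of thumb can be summarized in a small sketch, assuming you have computed the share of performed actions that exceeded their average response time and the share that exceeded their target response time (the function and its thresholds simply restate the list above):

    def decide(share_over_average, share_over_target, acceptable_variation=0.05):
        """share_over_average: fraction of performed actions slower than their
        average response time; share_over_target: fraction slower than their
        target (desired) response time. Returns a rough recommendation."""
        if share_over_average > 0.25:
            return "return for complete reworking before re-evaluation"
        if share_over_average > 0.10:
            return "continue evaluation, but re-test before final purchase"
        if share_over_target > acceptable_variation:
            return "continue implementation, but re-test before go-live"
        return "targets met - proceed"

    # Hypothetical usage:
    # print(decide(share_over_average=0.12, share_over_target=0.30))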

Naturally, this methodology can be expanded and amended with other elements. But even this version is quite capable of producing very realistic results, close to the everyday working conditions under which the software will function.


Talkback and comments are most welcome
