When to address application performance – Part 2

  1. Constant attention to Performance
  2. Performance Objectives
  3. Documents
  4. Project Phase
    1. Analysis phase
    2. Design phase
    3. Development phase
    4. Testing phase
    5. Go Live and Post Go Live phase

Once the scenarios are catalogued, each falls into one of two categories: time-sensitive scenarios, where we need to get the data back with a good response time, or volume-sensitive scenarios, where we need to verify that the expected volume can be handled in the given time.

Scenarios should be divided into two categories: those executed during the day, at peak hours, and those executed at night (e.g., batch processes). For the first category, after each individual scenario has been developed, you will have to plan a phase where these scenarios run simultaneously, to simulate the load and understand how the system behaves under it.

For the second category, you must measure the time required to process the volume expected by the scenario, then calculate, based on the batch windows you have, whether there is enough time or whether the processing needs to be optimized.
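
The calculation itself is trivial but worth formalizing. As a minimal sketch (all figures below are hypothetical assumptions, not measurements), a simple X++ runnable class comparing the time needed against the available window could look like this:

```xpp
/// Hypothetical sizing check: is the nightly batch window long enough?
internal final class BatchWindowCheck
{
    public static void main(Args _args)
    {
        real recordsToProcess = 2000000;  // expected nightly volume (assumption)
        real recordsPerHour   = 250000;   // throughput measured on a test run (assumption)
        real windowHours      = 6;        // batch window, e.g. 23:00 to 05:00 (assumption)

        real hoursNeeded = recordsToProcess / recordsPerHour; // 8 h in this example

        if (hoursNeeded > windowHours)
        {
            warning(strFmt("Window too short: %1 h needed, %2 h available - optimize or parallelize.",
                hoursNeeded, windowHours));
        }
        else
        {
            info(strFmt("OK: %1 h needed, %2 h available.", hoursNeeded, windowHours));
        }
    }
}
```

If the window turns out to be too short, as in this example, the options are to optimize the processing itself or to parallelize it, which is exactly what the design phase below addresses.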

Knowing the data volumes from this phase onwards has an impact on the way developers code their extensions, since they can take volumetry into account in their developments.

An Excel file can be set up for follow-up, for example with the following tabs:

  • Scenario tab: Day-to-Day (yes/no), name of the scenario, description (with details), additional info, scheduling (when the scenario runs), data volume, performance objectives, and Task recording: the list of recorded tasks used to unroll the scenario (how the scenario will work), which allows for a better description of the scenarios.
  • Master Data tab: with subscription estimate data: number of legal entities, customers, suppliers, stocked and non-stocked products, and users. Try to define the situation both in the past and in the future.

During the analysis phase it is important to implement the following elements:

  • Educate and sensitize teams to performance: how to detect performance problems during tests.
  • Define the environment to use for performance testing.
  • Ensure time has been allocated and planned for performance testing.
  • Consider non-functional performance requirements.
  • Identify critical scenarios that would be candidates for performance evaluation.

During the design phase, essential elements to be implemented are:

  • Design an architecture that focuses on business performance and best practices (such as moving non-critical work to a night batch job), implement a multithreaded approach in batch design when large volumes of data are processed, and consider volume concepts in data integration (see the sketch after this list).
  • Define workflows for investigating performance issues, so that teams are prepared for the worst.
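
To illustrate the multithreaded batch approach, here is a minimal sketch of the classic task-bundling pattern in X++. Class names, volumes, and bundle sizes are hypothetical assumptions; `DemoBundleWorker` stands in for any RunBaseBatch task that processes one range of records:

```xpp
/// Minimal sketch of task bundling: a scheduler batch job splits a large
/// workload into ranges and adds one runtime child task per range, so the
/// batch server can execute the bundles in parallel.
internal final class DemoBundleScheduler extends RunBaseBatch
{
    public void run()
    {
        int64 totalRecords = 1000000; // assumption: known from the volume analysis
        int64 bundleSize   = 100000;  // assumption: tuned during performance tests
        int64 fromRecId    = 1;

        if (this.isInBatch())
        {
            BatchHeader batchHeader = BatchHeader::getCurrentBatchHeader();

            while (fromRecId <= totalRecords)
            {
                // DemoBundleWorker is a hypothetical RunBaseBatch task
                // that processes one range of records.
                batchHeader.addRuntimeTask(
                    DemoBundleWorker::newFromRange(fromRecId, fromRecId + bundleSize - 1),
                    this.parmCurrentBatch().RecId);

                fromRecId += bundleSize;
            }

            batchHeader.save(); // persist the child tasks so they get scheduled
        }
    }

    public container pack()
    {
        return conNull();
    }

    public boolean unpack(container _packedClass)
    {
        return true;
    }
}
```

The bundle size is a trade-off: too small and the batch framework's scheduling overhead dominates; too large and parallelism is lost. The volume figures collected during the analysis phase are exactly what you need to choose it.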

Now that the data is linked to the critical functional scenarios and the characteristics of the data are known before development, it is good to have all these elements documented and communicated to the developers.

During development, it is important to always keep in mind the goals set for each critical scenario, as this helps the developer treat them as development requirements instead of having them managed after the fact, at testing time.

For example, replay the feature with the Task recorder and analyze it with the Trace parser during the development phase, to see whether the test results trend towards the objectives set. If there is a problem, the developer can look at the X++ or SQL side of the code and see what can be optimized. Tests with realistic configurations also make it quicker to find where a performance problem comes from.
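
One of the most common X++ optimizations surfaced by a trace is replacing record-by-record database operations with set-based ones. A minimal sketch, with a hypothetical table `DemoCustBalance` (fields `CreditLimit` and `CustGroup`):

```xpp
internal final class DemoSetBasedUpdate
{
    public static void main(Args _args)
    {
        DemoCustBalance balance; // hypothetical table

        // Row-by-row: one round trip to SQL per record; slow on large volumes.
        ttsbegin;
        while select forUpdate balance
            where balance.CustGroup == 'Retail'
        {
            balance.CreditLimit = 10000;
            balance.update();
        }
        ttscommit;

        // Set-based: a single UPDATE statement on the SQL side,
        // often orders of magnitude faster on large volumes.
        ttsbegin;
        update_recordset balance
            setting CreditLimit = 10000
            where balance.CustGroup == 'Retail';
        ttscommit;
    }
}
```

Note that if the table overrides update(), the kernel silently falls back to row-by-row processing unless skipDataMethods(true) is called, so verify the actual behavior with a trace.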

You need a complete view of performance: do not focus only on user interface (form) performance, but also on reports, integrations, and batch processing.

The developments are now complete, and the developers have taken the performance objectives into account as a functional requirement.

It is a good idea to think about how all the elements developed will come together to form a concrete scenario. Once this design is ready, the next step is to create the actual scripts.

The next step is to deploy the scripts so that you can run a performance simulation test. This must be done on a dedicated environment: depending on the number of users who will test and the number of transactions per hour (the volume), you will have to choose the appropriate environment.

When you run the tests, start with simple ones and gradually increase complexity: one scenario and one user to start with, then increase the number of scenarios and users. If there is a performance problem, the more scenarios are running, the harder it is to trace its origin.

Depending on the problems encountered, this can generate a considerable amount of rework and optimization, which may subsequently delay your project schedule.

Finally, record the test results and the optimizations made in a report, so that if a new performance problem occurs later, you do not start from zero.

During the testing phase it is important to focus on:

  • Identify the different types of performance problems: forms that block or crash, interrupted interfaces and queued messages, Power BI reports that do not load, and batches that take days rather than hours to run.
  • Collaborate with the right stakeholders to investigate and resolve performance issues.
  • Receive and apply the corrections provided.
  • Be concerned about the scalability of the solution in terms of load and number of users.

Go Live and Post Go Live phase

It is important to focus on the following:

  • Verify that, in the event of a performance problem, tools and processes are in place to investigate and resolve it effectively, and that all stakeholders (partners, ISVs, Microsoft) are mobilized.
  • Ensure that performance risks have been detected and resolved prior to go live.
  • Have positive communication about the solution.