We are testing Web Services where we set the latency (Think Time Spec) for each response we send back.
During performance testing, after some time the latency set in the response stops being honored and the default value is picked up instead, which is causing high CPU utilization.
Please suggest what could be the reason for this.
I am going to move this into the questions so that you get more of a response.
Hi Muddy, can you provide us with more information?
- Version of DevTest?
- Which OS are you running? How much memory is allocated to the VSE? How many CPUs and Cores are on this machine?
- When you say high CPU, what % CPU are you seeing that is causing you to be alarmed?
- Are you using the DevTest Derby DB or have you connected DevTest to an Enterprise DBMS?
- Do you have logging set to WARN or INFO?
- Do you have a Performance VSE license and, if so, have you configured lisa.vse.performance.enabled=true?
- What TPS rate are you seeing when the Think Scale appears to reset? What is the TPS rate you are trying to achieve?
- How many services in the VSE are being consumed during the performance test?
- What is the Capacity setting on the offending service?
- What are you setting the Think Scale to? 0%, 50%, 100%, 150%, etc.
- How long does the test run before you start seeing reduced throughput?
- What type of Transport protocol are you using (HTTP, JMS, TCP, etc.)?
- Do you have any customizations (scripts, additional steps, custom extensions) or are you using Live Invocation?
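Since the Performance VSE property comes up above, here is a minimal sketch of the setting in question. The file name and placement are assumptions based on a typical DevTest install; confirm the exact file and property against the documentation for your version:

```properties
# local.properties on the VSE machine (assumed location; site.properties may also apply)
# Enables performance-grade VSE behavior; requires a Performance VSE license
lisa.vse.performance.enabled=true
```

Note that verbose logging (e.g. DEBUG) on the VSE can itself drive up CPU during a performance run, which is why the questions above ask whether logging is set to WARN or INFO.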
Typically, we want to see high CPU utilization. If the CPU drops, that is an indicator that the services are being interrupted by some other event or activity such as file I/O or network latency.
Support might have a patch for cases where think time is not honored after some time. Can you please let us know which version you are running when you see this issue?