How are you doing?
We have a test case with 25 steps, with the following flow:
a) Read a parameterized XML file
b) Prepare the XML message to be published to a Queue1
c) Call a subprocess with approximately 2 steps to publish to Queue1 (JMS JNDI step)
d) Validate the response message
e) Read a parameterized XML file
f) Prepare the XML message to be published to a Queue2
g) Call a subprocess with approximately 2 steps to publish to Queue2 (JMS JNDI step)
h) Validate the response message
i) Read a parameterized XML file
j) Prepare the XML message to be published to a Queue3
k) Call a subprocess with approximately 2 steps to publish to Queue3 (JMS JNDI step)
l) Validate the response message
m) Four database query steps to validate data integrity
n) Custom Report generation step
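To make the shape of the flow concrete, here is a minimal Python sketch of the test case above. The helper names are hypothetical stand-ins for the DevTest steps, not a real API; this only illustrates the three read/prepare/publish/validate cycles followed by the database checks and report step.

```python
# Hypothetical sketch of the 25-step flow; helper names are illustrative only.

def read_parameterized_xml(queue: str) -> str:
    return f"<msg target='{queue}'/>"           # steps a), e), i)

def prepare_message(xml: str) -> str:
    return xml                                   # steps b), f), j)

def publish_and_get_response(queue: str, msg: str) -> str:
    return "OK"                                  # steps c), g), k): JMS publish stub

def validate(response: str) -> bool:
    return response == "OK"                      # steps d), h), l)

def db_integrity_checks() -> bool:
    return all(True for _ in range(4))           # step m): four DB query steps

def run_test_case() -> bool:
    for queue in ("Queue1", "Queue2", "Queue3"):
        msg = prepare_message(read_parameterized_xml(queue))
        if not validate(publish_and_get_response(queue, msg)):
            return False
    ok = db_integrity_checks()
    # step n): custom report generation would follow here
    return ok
```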
Expectation: We need to run this test case 1,000 times to inject that many records into our environment.
Considered approach: We created a staging document with “Run N Times” as the load pattern, using the following settings, to inject 1,000 records.
Our simulator supports 256 Virtual User instances.
When staging the test case, only one virtual user is created, and the load report does not show the individual step execution details.
I do not see any data being inserted.
Please find attached supporting screenshots for reference.
Please let me know if I am missing anything.
Thanks in advance.
I assume this workflow works when run using ITR.
A few questions:
1. What do the Coordinator and Simulator logs contain around the time the load test was started?
2. How is your data set defined in the workflow?
3. Do the DevTest components (workstation, coordinator & simulator) all have access to all the resources being used by the workflow? (XML files, Messaging infrastructure, Database, filesystem)
Thanks for the response, much appreciated.
Yes, the flow works absolutely fine in ITR mode.
1. What do the Coordinator and Simulator logs contain around the time the load test was started? - Unfortunately, I do not have access to these logs.
2. How is your data set defined in the workflow? - The data sets are local Excel spreadsheets.
3. Do the DevTest components (workstation, coordinator & simulator) all have access to all the resources being used by the workflow? (XML files, Messaging infrastructure, Database, filesystem) - I am sure about the Workstation; I assume the Coordinator and Simulator also have the required access.
Based on your prior experience, what could the possible causes be? I can start my analysis from those and engage our infrastructure support team to obtain details such as logs.
I would start by looking at the Coordinator/Simulator logs and file system to check whether they report any errors and whether all the required assets are present on the Simulator instance.
I will engage with my environment support team and do the analysis.
I will keep you posted with the findings.
Is it possible to turn this on its head a bit - read all the data and loop over it in a single test instance rather than repeating the test (use a counted data set to control the number of executions)?
I would expect performance to be considerably faster.
If more parallelism is required, then a hybrid approach could be tried.
Thanks for your inputs.
At the moment we are considering a data-driven approach for injecting the data, as you suggested.
Could you please elaborate on the hybrid approach? I have never implemented such a strategy.
The point here is that some operations are expensive when running performance tests; in the case you outline, every test will read the input data from a file every time it executes.
There are a couple of ways to make this faster:
If your data is the same in each execution, add a "Numeric Counting Data Set" to a step after you have read your data, and set it to the number of times you wish to execute. The key here is that the LAST step should be the one with the data set associated with it, and the action of exhausting the data set should control your loop termination. There can, of course, be any number of steps before the data set.
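As a rough analogy (plain Python, not the DevTest API), the counted-loop idea can be sketched like this: pay the expensive setup cost once, then let a counting data set drive the iterations, with exhaustion of the counter ending the test. All function names here are hypothetical stand-ins.

```python
# Sketch of the counted-loop pattern; hypothetical stand-in functions.

def load_input_once() -> dict:
    """Stand-in for the expensive 'read a parameterized XML file' step."""
    return {"payload": "<order id='1'/>"}

def publish(message: dict) -> None:
    """Stand-in for the JMS publish step."""
    pass

def validate(message: dict) -> bool:
    """Stand-in for the response-validation step."""
    return True

def run_counted_loop(iterations: int) -> int:
    data = load_input_once()      # expensive setup: done once, not per iteration
    executed = 0
    for _ in range(iterations):   # analogue of the Numeric Counting Data Set;
        publish(data)             # running out of counter values ends the loop,
        assert validate(data)     # just as data-set exhaustion ends the test
        executed += 1
    return executed
```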
By way of illustration, here is a trivial example that will execute two steps 10 times each and then end the test.
The second way is very similar to that above - but in this case use a data set that reads from a file (for example "Read Rows From a Delimited File" or "XML Data Set") and loop over the contents of the file until it is finished.
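The file-backed variant can be sketched the same way: each row of a delimited file supplies one iteration's data, and the loop terminates when the file is exhausted. Again, this is an illustrative analogue of the "Read Rows From a Delimited File" idea in plain Python, not the DevTest API.

```python
import csv
import io

# Sketch of the file-driven loop: one test iteration per data row,
# terminated by end-of-file. Illustrative only.

def run_data_driven(delimited_text: str) -> int:
    executed = 0
    reader = csv.DictReader(io.StringIO(delimited_text))
    for row in reader:            # each row parameterizes one iteration
        message = dict(row)       # prepare the message from the row's fields
        # publish(message) and response validation would go here
        executed += 1
    return executed               # loop ended because the data was exhausted
```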
In these ways you can avoid the spin-up-spin-down of tests that have an expensive startup.
I hope that helps
Thanks for the response Dave
I shall try out these options and comment with the results.
Did Dave's recommendations help you out with this issue?
Yes, Dave's recommendation worked. Sorry for the delayed response.