As simulations get bigger and bigger, managing all the requests and responses in the VSE image becomes difficult and confusing (see image-01). For example: if you have many requests and need to delete an unneeded one, identifying the corresponding request is not easy. Is there a trick for giving a request a "name"?
There is only an ID for identification (which changes when it is regenerated). Is there a way to store a comment for each request and/or response?
Any ideas? Best practices? I hope everything was understandable.
Thanks in advance
We have the same concern. The biggest limitation here is that we can't find/search/query a transaction in the Workstation VSI section. We recently migrated legacy stubs into LISA, and the number of transactions migrated from stub to LISA is more than 500. Users now want to search for their record with some kind of search method, but there is no option to search inside the VSI (there is a FIND option, but it does not help at all). I have raised a support ticket as well. It would be a good feature, given that DevTest is used heavily in some cases.
Ok, looks like I have to move to a feature request! Is there any response/statement from CA regarding this question?
An approach I have used in the past is to add a Use Case description to each transaction so developers and testers could quickly identify their response scenarios.
It is up to you to decide whether or not this approach is viable given your specific requirements. For example, if you need to use learning mode, the approach may not be viable because Live Systems do not contain the argument, explained below, in the Live Response.
When a transaction enters a VSM, most services immediately parse the incoming transaction to construct a set of arguments used in the response selection process. We can extend this concept by adding an argument whose sole purpose is to provide a description of the Use Case scenario for those individuals who need to look at a VSI to determine response Use Cases.
1) Add an additional argument to the request.
ParameterList argList = lisa_vse_request.getArguments();
argList.addParameters("DevTestUseCase=someDescription"); // the new argument is called 'DevTestUseCase' but can be anything
lisa_vse_request.setArguments( argList );
You can add logic such as "if the operation is XYZ, add the argument; otherwise skip adding the argument" to achieve your specific requirements.
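The conditional tagging described above can be sketched in plain Java. This is an illustration only, not the DevTest scriptable API: the operation name "XYZ", the Map stand-in for the request's ParameterList, and the helper method are all hypothetical.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class UseCaseTagSketch {

    // Hypothetical stand-in for the scriptable step: tag only the
    // operations we care about with a DevTestUseCase description.
    static Map<String, String> tagByOperation(String operation,
                                              Map<String, String> args) {
        if ("XYZ".equals(operation)) {
            // Add the documentation-only argument for this operation.
            args.put("DevTestUseCase", "someDescription");
        }
        // Other operations pass through untouched.
        return args;
    }

    public static void main(String[] ignored) {
        Map<String, String> args = new LinkedHashMap<>();
        args.put("customerId", "42");

        tagByOperation("XYZ", args);
        System.out.println(args.containsKey("DevTestUseCase")); // prints true
    }
}
```

In the real scriptable DPH, the same branch would simply wrap the `argList.addParameters(...)` call shown above.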
2) Now you have an argument (key/value pair) added to each incoming request.
If recording, add the Scriptable DPH in the Recorder when you set up your DPHs, and your Service Image picks up the additional argument at record time.
If you have an existing service:
Add the DPH,
Open the VSI,
Manually add the argument to each operation.
Just take care to ensure that you use the exact same argument name in your VSI as your Scriptable adds. (In this example, DevTestUseCase.)
3) In the VSI, set your argument for each operation to compare on ANYTHING so the argument does not participate in any comparisons. You can use the Mass Change process to do this after adding the argument to each Operation's META.
4) Edit the Value associated with your argument to reflect the Use Case that each specific response applies to. Your value is freeform so as the Use Cases change, so can the description.
Now, the response Use Case is easily identifiable for each transaction in the VSI.
Here is an example VSI. The associated VSM has the Scriptable DPH to add the argument called DevTestUseCase using the code above.
Thanks for your detailed answer, Joel.
Hmm, I really hoped there would be a "better" solution. I'm not quite happy about adding fake arguments to all requests/responses, but I think there is no better solution out there at the moment (if there is, I'm still very interested :-)). Maybe the data-driven approach in Excel would be a more supportable/maintainable solution - any experiences? Thanks again for the response.
You will not get an argument from me RE adding args. It is a decision that each team must make based on their desired outcome(s). Data driven approaches externalize the data, but may or may not adequately document service behaviors.
I believe it would be good for you to create an Idea for the product team's review.
If one considers the CA solutions (ARD, TDM, DevTest, RA, etc) as an integrated DevOps ecosystem, it seems reasonable to document service behavior at the use case flow / coverage level (e.g., ARD). Perhaps, this documentation flows down into each asset. CA is improving this ecosystem in each respective product release.
Meanwhile, perhaps someone else will chime in with other alternatives.