Hi Aditya,
In addition to Aditi's answers above, here is some more information that may help.
In the context of generating Agile Designer flows from a test data model in Test Data Manager, using a shredded database (created from an XSD with the shredder utility):
How to generate a common Test Case Optimizer (Agile Designer) flow, with all shredded tables included?
Typically, flows are created to design test cases and are usually derived from functional requirements or existing test cases. Can you tell us exactly what you want to achieve, so we can determine whether it is possible or recommended?
For that, can we import multiple tables into the Data Visualizer tool? If not, can we create a view that includes all the tables, using the TDM product suite?
You cannot connect to multiple tables at once directly in Data Visualizer, but you can connect to and sample a view, as you suggest. The following link describes a method for creating such views with the TDM suite; alternatively, you can always issue standard SQL commands to achieve the same result.
https://docops.ca.com/ca-test-data-manager/3-6/en/provisioning-test-data/discover-personally-identifiable-information/define-cube-dimensions-and-create-the-view
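To illustrate the plain-SQL route, here is a minimal sketch using Python's sqlite3. The table and column names (`person`, `address`) are hypothetical stand-ins for whatever tables the shredder utility produced; the point is simply that one view joining the shredded tables gives a single object that a tool sampling one table at a time can read.

```python
import sqlite3

# Hypothetical shredded tables; names and columns are for illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE person (person_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE address (address_id INTEGER PRIMARY KEY,
                      person_id INTEGER REFERENCES person(person_id),
                      city TEXT);
INSERT INTO person VALUES (1, 'Aditya');
INSERT INTO address VALUES (10, 1, 'Pune');
""")

# One view joining the shredded tables, so a tool that connects to a
# single object at a time can still see all of the columns together.
conn.execute("""
CREATE VIEW all_shredded AS
SELECT p.person_id, p.name, a.city
FROM person p
JOIN address a ON a.person_id = p.person_id
""")

rows = conn.execute("SELECT name, city FROM all_shredded").fetchall()
print(rows)  # [('Aditya', 'Pune')]
```

The same `CREATE VIEW ... AS SELECT ... JOIN ...` statement works against any SQL database the shredder targets; only the connection mechanism changes.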
After creating the Test Case Optimizer flow, how can we integrate it with TDM?
You can define test data criteria automatically per test case in ARD and send this data automatically to TDM, which creates the required test data entities in the correct environments, ready for test execution against them. These entities can span multiple databases and be very complicated. For example, a test might specify that it needs a person with an AMEX credit card; DataMaker then builds that person and their credit cards in all of the relevant databases, essentially providing the criteria for any parameterised makes that exist in DataMaker.
The same test data criteria can be used to ‘find’ data that already exists across multiple environments and matches the criteria of the test; TDM then reports that data back so it can be used in any test scripts. See Test Matching for more information.
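Conceptually, finding existing data that matches a test's criteria boils down to a parameterised query across the relevant tables. This is a generic sketch of that idea, not the Test Matching feature itself; the `person` and `credit_card` tables and the `card_type` criterion are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE person (person_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE credit_card (card_id INTEGER PRIMARY KEY,
                          person_id INTEGER REFERENCES person(person_id),
                          card_type TEXT);
INSERT INTO person VALUES (1, 'Alice'), (2, 'Bob');
INSERT INTO credit_card VALUES (100, 1, 'AMEX'), (101, 2, 'VISA');
""")

# Criteria taken from the test case: a person who holds an AMEX card.
criteria = {"card_type": "AMEX"}
matches = conn.execute(
    "SELECT p.person_id, p.name FROM person p "
    "JOIN credit_card c ON c.person_id = p.person_id "
    "WHERE c.card_type = :card_type", criteria).fetchall()
print(matches)  # [(1, 'Alice')]
```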
Any data used within the scripts themselves can be highly dynamic: using the synthetic data generation functions of DataMaker (a part of TDM), you can have rich, referentially integral data inside your automation scripts rather than hard-coded values.
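To show what "referentially integral" means here, the following is a tiny, generic generator sketch: it is not the DataMaker API, just plain Python, and every name in it (`make_person`, `make_card`, the card types) is invented. The key point is that child rows take their foreign keys from generated parent rows, so the synthetic data always joins correctly.

```python
import random

random.seed(42)  # deterministic, so repeated runs produce the same data

CARD_TYPES = ["AMEX", "VISA", "MASTERCARD"]

def make_person(person_id):
    # A synthetic person row; values are generated, not hard-coded per test.
    return {"person_id": person_id, "name": f"user_{person_id}"}

def make_card(card_id, person):
    # The foreign key is copied from the parent row, keeping the
    # generated tables referentially integral.
    return {"card_id": card_id,
            "person_id": person["person_id"],
            "card_type": random.choice(CARD_TYPES)}

people = [make_person(i) for i in range(1, 4)]
cards = [make_card(100 + i, p) for i, p in enumerate(people)]

# Every generated card references a person that actually exists.
person_ids = {p["person_id"] for p in people}
assert all(c["person_id"] in person_ids for c in cards)
```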
You can store ARD flows within a TDM repository, which many users can access. The repository has a locking system that lets many users collaborate on a project consisting of many flows without constantly emailing flows to each other or overwriting each other’s work.
Please let us know if you have any more questions or need clarification on anything discussed above.
Best regards,
Taylor