Has anyone tried replicating policies across different gateway clusters? The idea is that if the policies are the same across clusters, it becomes easier to manage traffic at the load balancer level (by enabling/disabling a port) as required, instead of dynamically adding/removing processing nodes.
You can use the Gateway Migration Utility (GMU) to migrate policies between gateways.
Migrate Gateways - CA API Gateway - 9.0 - CA Technologies Documentation
Thanks for replying, but my question was not about migrating policies. It was about keeping the policies in sync across different clusters using something like MySQL replication, so that changes sync across automatically. Something that replicates only the published_service table, plus any associated tables, to the other databases.
You could have one cluster span both datacenters and point the load balancer health check at a custom service (instead of the built-in /ssg/ping or ICMP ping). That way you can make the health check service fail on the nodes you want the load balancer to remove from rotation, and your policies stay updated in both datacenters regardless of which node you connect to when publishing changes.
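As a rough illustration of that health-check approach, assuming an HAProxy front end and a hypothetical custom health service published on the gateway at /gw/health (both the path and the addresses below are placeholders, not anything from the product):

```
backend gateway_dc1
    # Probe the custom policy-backed health service instead of /ssg/ping;
    # making that service return non-200 on a node pulls it from rotation.
    option httpchk GET /gw/health
    http-check expect status 200
    server node1 10.0.1.10:8443 check ssl verify none
    server node2 10.0.1.11:8443 check ssl verify none
```

The same idea works with any load balancer that supports HTTP health monitors; the key point is that the monitor target is a service you control from policy.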
It's good to hear from you again. The problem is that we are limited by the number of nodes in a cluster (a suggested maximum of 6 or 8 when I last heard from CA, and performance degrades as the node count grows). So we have 4 separate clusters in each data center, split by the type of protected services and the traffic.
Thanks & regards,
MySQL only supports two databases sharing master-master replication; however, you could set up the database nodes of the other clusters as slaves to the 'primary' (for lack of a better term) cluster's database in their local datacenter.
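A minimal sketch of wiring a secondary cluster's database up as a slave, using classic MySQL 5.x replication syntax; the hostname, user, password, and binlog coordinates are all placeholders you would take from your own primary:

```sql
-- Run on the secondary cluster's database node (MySQL 5.x syntax).
-- Host, credentials, and log coordinates below are placeholders.
CHANGE MASTER TO
    MASTER_HOST = 'primary-db.dc1.example.com',
    MASTER_USER = 'repl',
    MASTER_PASSWORD = 'replpass',
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS = 4;
START SLAVE;

-- Verify: Slave_IO_Running and Slave_SQL_Running should both be 'Yes'.
SHOW SLAVE STATUS\G
```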
You would need the discipline to make changes only on the cluster that spans the datacenters (the others could each sit in a single datacenter). You might still end up with collisions on the audits and metrics: you can off-box the audits (or disable them) to prevent those, but I'm not sure you can disable the metrics. Also keep an eye on the tables that hold cluster-specific data, though I think those should be OK.
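If you do go the replication route, the collision-prone tables can be filtered out on each slave with replication filters. The table names below are guesses at the Gateway schema (only published_service is confirmed in this thread), so verify them against your actual database before using:

```ini
# my.cnf on each slave: skip locally-generated audit/metrics rows.
# Table names are illustrative; confirm against your Gateway schema.
[mysqld]
replicate-wild-ignore-table = ssg.audit%
replicate-wild-ignore-table = ssg.service_metrics%
replicate-ignore-table      = ssg.cluster_info
```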
It might be easier to just run a SQL script that pushes the relevant tables' data across to the other clusters' databases. If you export/import to sync them up first, all the index keys will match, so you won't have to worry about any broken object references...
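That push could be as simple as a mysqldump pipe, scheduled from cron on the primary's database host. Everything here is a placeholder sketch: hostnames and credentials are made up, and apart from published_service (mentioned above) the table list is an assumption you would need to complete for your Gateway version:

```shell
#!/bin/sh
# Push the policy-related tables from the primary cluster's database
# to one secondary cluster. Hosts, credentials, and the table list
# (beyond published_service) are placeholders - verify against your schema.
mysqldump -h primary-db.dc1.example.com -u gateway -p'secret' ssg \
    published_service policy policy_version \
  | mysql -h secondary-db.dc1.example.com -u gateway -p'secret' ssg
```

Because mysqldump emits DROP TABLE/CREATE TABLE statements by default, each run fully replaces those tables on the target, which keeps the index keys aligned.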
You could script the whole thing using CMT/WSMan (GMU) as Gopinath suggests, but personally I think it may be easier to just deal with the database: if you can get master-slave replication or an event trigger working to your non-primary clusters, it becomes fully automatic... You will also want to keep an eye on the MySQL binary and relay logs...