My group is tasked with monitoring just over 20 external customers. These customers are retail store chains, each with a varying number of servers that we monitor. On our side we have our UIM server (with UMP), plus an SQL server and a CABI server.
At the customer side, there is a number of robots and a customer hub.
The issue we are facing is that some of the customers are running webshop functionality, and we monitor parts of that functionality. If, for some reason, the hub at the customer site goes down, we will not get any webshop-related alerts.
So I've been thinking about setting up a secondary hub at the customer site so that alerts are still sent even if the primary hub goes down. But I'm a little uncertain as to how this should be done.
Is it enough to just install a secondary hub on one of the robots, with its own tunnel certificate and alarm queues, and make sure the controllers are set to automatically look for a secondary hub in the subnet (or alternatively to specify the secondary hub explicitly)? Or would it be better to set up a secondary hub and use the HA probe?
Regards,
Espen B Hanssen
A more traditional route, from before the HA probe was available, is the following; it is also an option if you want to avoid any lag or software failure in the HA probe itself.
You can deploy two hubs. If you're using Linux, make them VMs; you can get by with 2 CPUs and 4 GB of RAM if you're doing minimal QoS and are mostly concerned with alarm data (I can push 3k robots easily on those specs). Remember that hub sizing isn't a clear-cut thing, as several variables come into effect, e.g. how many alarms are being pushed, QoS metrics, etc.
On the customer's robot configuration, you need to set the following keys inside the <controller> section:
hub = <name of hub, e.g. hub01>
hubip = <hub ip>
hubport = <hub port>
hubrobotname = <hub robot name>
secondary_domain = <UIM domain>
secondary_hub = <hub name, e.g. hub02>
secondary_hubip = <hub ip>
secondary_hubport = <hub port>
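For illustration, a filled-in <controller> section might look like the following. The hub names, IPs, ports, and domain here are made-up examples, so substitute values from your own environment:

```
<controller>
   hub = hub01
   hubip = 192.168.10.5
   hubport = 48002
   hubrobotname = hubserver01
   secondary_domain = CustomerDomain
   secondary_hub = hub02
   secondary_hubip = 192.168.10.6
   secondary_hubport = 48002
</controller>
```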
Alternatively, you can set only the following keys, since the controller will restart itself if it finds the hub name and/or hub robot name different, and will write them to the configuration file before the restart:
hubip = <hub ip>
hubport = <hub port>
secondary_hubip = <hub ip>
secondary_hubport = <hub port>
After you have both your hubs up and running, you can post, or get/attach, queues from your primary hub to both hubs (hub01 and hub02). If you're using post queues and the HA probe, there can always be a delay before it brings the queues up, which you have to account for or you may lose the alarm/QoS data. There are also limitations on the number of queues and/or tunnels to take into consideration; Linux and Windows hubs have different limits, and depending on the number of hubs and the size of the infrastructure, some deployments need a relay hub to consolidate everything.
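To make the attach/get pairing above concrete, here is a rough sketch of matching queue definitions in hub.cfg: an attach queue on the customer hub and a get queue on the central hub pulling from it. The section names, key names, and address are from memory and may not match your UIM version exactly, so verify against a queue created through the hub GUI before relying on this:

```
# On the customer hub (hub01) - expose alarms on an attach queue
<queues>
   <alarm_to_central>
      active = yes
      type = attach
      subject = alarm
   </alarm_to_central>
</queues>

# On the central hub - pull from that queue with a get queue
<queues>
   <alarm_from_hub01>
      active = yes
      type = get
      remote_queue = alarm_to_central
      address = /CustomerDomain/hub01/hubserver01/hub
   </alarm_from_hub01>
</queues>
```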
I hope this helps!
Most clients would set up the HA probe so that when the client's primary hub is down, the queues get activated on the HA hub to replace the ones that were there.
If you had a large client environment that needed two hubs for load balancing, you could use each hub as the secondary for the other's set of robots.