We are monitoring IBM MQ servers with the CA APM PowerPack agent, which is installed on a separate server from the one running the APM EM. The two MQ servers run in fault-tolerant (multi-instance) mode. When the queue manager on server1 fails over to the queue manager on server2, the PowerPack agent keeps reporting queue metrics for the now-inactive queue manager (server1) instead of the active one (server2).
This is what I see in the logs:
2/20/19 01:35:11 PM MST [WARN] [com.wily.powerpack.websphereMQ.agent.MQMonitor.trace.QueueBrowserUtil] QueueBrowserUtil.processMQException Could not connect to configuration instance: QMDIST1|srpwmq35. Reason Code: 2009 MQRC_CONNECTION_BROKEN Will try to connect to configuration instance after 60 seconds
I tried changing the shared connection setup for QMDISD1 (srpwmq05/06) and failing the queue manager over back and forth.
That changed the MQ error message in the log, but not the monitor's behavior:
2/21/19 03:08:01 PM MST [WARN] [com.wily.powerpack.websphereMQ.agent.MQMonitor.trace.QueueBrowserUtil] QueueBrowserUtil.processMQException Could not connect to configuration instance: QMDISD1|srpwmq05. Reason Code: 2161 MQRC_Q_MGR_QUIESCING Will try to connect to configuration instance after 60 seconds
2/21/19 03:08:04 PM MST [ERROR] [com.wily.powerpack.websphereMQ.agent.MQMonitor.TracerDriverThread] MQMonitor: For configuration instance QMDISD1@srpwmq05 and the drivers(manager,manager) an error occured in sending query to MQ. The target MQ (srpwmq05:1414) may be down. Reason code 6124 MQRC_NOT_CONNECTED
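Conceptually, what I would expect the monitor to do on failover is try each configured instance in turn rather than retrying only the inactive one. A minimal, hypothetical sketch of that behavior (this is not the PowerPack's actual code; connect_fn stands in for the real MQ connect call):

```python
# Hypothetical failover-aware connection logic: try each configured
# instance in order and move to the next when a connect attempt fails.
# connect_fn is a placeholder for the real MQ connect call.

def connect_with_failover(hosts, connect_fn):
    """Return (host, connection) for the first instance that accepts a
    connection, or raise if every instance is unreachable."""
    last_error = None
    for host in hosts:
        try:
            return host, connect_fn(host)
        except ConnectionError as exc:  # e.g. MQRC 2009 CONNECTION_BROKEN
            last_error = exc            # remember why this host failed
    raise ConnectionError(f"no instance reachable: {last_error}")
```

With srpwmq05 down, a loop like this would fall through to srpwmq06 instead of retrying the inactive instance every 60 seconds.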
Thank you so much in advance.
So the connection either cannot be established or is dropping once established. Looking at netstat output or a packet capture may tell you what is happening. Failing that, please open a case.
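Before opening a case, a quick TCP-level check from the agent host against each instance's listener can confirm whether the port is reachable at all. A minimal Python sketch (the hostnames and port below are taken from the logs above):

```python
import socket

def listener_reachable(host, port, timeout=3.0):
    """Attempt a plain TCP connect to the MQ listener port.
    True means the port accepted the connection; False means it was
    refused or unreachable within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example checks against each instance's listener:
# listener_reachable("srpwmq05", 1414)
# listener_reachable("srpwmq06", 1414)
```

If the active instance's listener is reachable but the monitor still reports the inactive one, the problem is in the agent's connection configuration rather than the network.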