I have this situation:
* uim-server (nas): with the principal nas
* hub-a (nas): when I deploy the nas probe, it creates 2 subscribers (nas and alarm_enrichment). This configuration works fine, but when I check the IM console I see that 2 alarms are created, one from each nas.
I don't know if this is normal, because in the UMP portal I only see 1 alarm.
I think this problem can be resolved with the forwarding and replication option of the nas probe.
I am trying to set up nas-to-nas forwarding/replication, but the probe fails; after a while the nas probe goes into the down state and I don't know why.
Note: before configuring the replication, I disabled the subscribers (nas and alarm_enrichment).
Try restarting the secondary hub robot and see what happens.
If the restart does not resolve it, then consider:
Forwarding/replication will not prevent nas from starting.
The real problem is that nas at the secondary hub fails to start.
The rest about forwarding/replication only serves to confuse and distract from the real problem.
Set nas to deactivate.
Set alarm_enrichment to deactivate.
Once alarm_enrichment shows both a PID and a port, activate nas.
If nas still doesn't activate, set it to log level 5 with a log size of at least 1000.
Do the above steps and check the logs.
If the nas log only shows "max restarts", repeat with the controller set to log level 3 and check its log.
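For reference, the log settings mentioned in these steps live in the probe's setup section of nas.cfg (the section/key names below are the standard Nimsoft probe logging keys; the values are just examples, and the change can equally be made through the probe GUI or raw configure):

```
<setup>
   loglevel = 5
   logsize = 1000
</setup>
```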
Another thing to try is to go back to the default config:
rename the two local .db files, and in the cfg set all the rules and profiles to disabled.
If it starts, things can be added back until the problem is found.
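As a rough illustration of "set all the rules and profiles to disabled": in a typical nas.cfg, each auto-operator profile carries an active flag that can be switched off. The section layout and the profile name below are assumptions drawn from a generic nas.cfg, so verify against your own file before editing:

```
<auto_operator>
   <definitions>
      <close_on_clear>
         active = no
      </close_on_clear>
   </definitions>
</auto_operator>
```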
Hi Yu_Ishitani and DavidM,
Thanks for your time and support.
What is the best practice for this situation?
uimserver - secondaryhubA - secondaryhubB
nas <--- nas <--- nas
* Configure forwarding/replication, or the standard configuration with the 2 subscribers (nas and alarm_enrichment)?
Hi, I don't think there are specific guidelines for a secondary nas.
Some people use one and some do not.
One specific scenario where you would use a secondary nas is the ToT (Time Over Threshold) feature.
That feature needs alarm_enrichment on the secondary hub, so you'll see nas there as well.
Hi, I configured forwarding/replication again and restarted the server. The problem is the same.
I configured the nas logs with loglevel = 5 and logsize = 100000.
I see these lines in the logs:
Dec 11 11:18:48:700  nas: SREQUEST: _close ->192.168.90.13/48002
Dec 11 11:18:48:700  nas: Failed in attaching to HUB, retry #101
Dec 11 11:18:48:968  nas: SqliteExecuteCallback: sqlite3_finalize returned:0
Dec 11 11:18:48:968  nas: SqliteExecuteCallback: sqlite3_finalize returned:0
Dec 11 11:18:49:157  nas: SqliteExecuteCallback: sqlite3_finalize returned:0
Dec 11 11:18:50:158  nas: SqliteExecuteCallback: sqlite3_finalize returned:0
Dec 11 11:18:51:159  nas: WaitLock: CARENATAHUB: _replPostQueue
The "Failed in attaching to HUB" line appears in the logs from retry #01 up to #101; at retry #101 the state of the nas turns red.
I attach the logs; maybe you can see something different and help me with this.
Just want to make sure my understanding of your configuration is correct:
primary hub - nas 2 - nas 3
Does nas 3 only have communication to nas 2, and then nas 2 to the primary hub?
Or are nas 2 and nas 3 set up for HA, with both communicating to the primary hub?
Based on the log, it looks like nas can't connect to its queue, and the messages are typical of what shows up in an HA setup.
My configuration is:
primaryhub - secondaryhubA - secondaryhubB
nas <--- nasA <--- nasB
nasB only communicates with nasA, and nasA only with the nas on the primaryhub.
The primaryhub consists of 1 server (Windows).
The secondaryhubA consists of 2 servers in a Microsoft cluster.
The secondaryhubB consists of 1 server (Linux).
The next most likely cause is either something blocking nas's access to the port, or a problem with the local db files.
To test the latter:
rename database.db & transactionlog.db.
If nas stays running afterwards, there was a problem with one or both of those dbs.
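The rename test above can be sketched in shell. The probe directory path and the backup-suffix approach are assumptions, not part of the original advice; the sketch uses a scratch directory with dummy files so it is runnable anywhere, but on a real system you would point NAS_DIR at the nas probe directory (for example /opt/nimsoft/probes/service/nas on Linux) and deactivate the probe first:

```shell
# In a real environment: set NAS_DIR to the nas probe directory and
# deactivate the probe before renaming. Here we create a scratch dir
# with dummy db files so the sketch runs anywhere.
NAS_DIR=$(mktemp -d)
touch "$NAS_DIR/database.db" "$NAS_DIR/transactionlog.db"

for db in database.db transactionlog.db; do
  if [ -f "$NAS_DIR/$db" ]; then
    mv "$NAS_DIR/$db" "$NAS_DIR/$db.bak"   # keep a backup rather than deleting
    echo "renamed $db -> $db.bak"
  fi
done
# Reactivating nas afterwards lets it create fresh, empty db files.
# If the probe then stays green, one of the renamed files was the problem.
```

Keeping the originals as .bak (instead of deleting them) means the old alarm history can be restored by renaming them back if the db files turn out not to be the cause.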