I have a Spectrum 9.3 fault-tolerant, distributed environment.
The Notifier is running on one of the principal SpectroSERVERs and sends the alarms for all landscapes. I'm not sure what the best way is to configure the Notifier for this redundancy situation.
I have seen other people put an "if" condition in the scripts on the secondary SS: if the hostname in the alarm belongs to the first SS, the Notifier script says "nothing to do". I think this could still cause alarms from the other landscapes to be mailed twice, once from the primary and once from the secondary SS.
I have three SANM applications, with three SANM policies and three SetScripts on the principal SS. On the secondary SS I copied the NOTIFIER directory from the primary SS. I did not start the three Notifier processes on the secondary because, as I said, I think that would send the alarm mails twice. I need to find a way for the secondary to send mails only when the principal SS is down, and for it to work for the other landscapes and with the three SANM applications...
Maybe you can use the precedence attribute instead of the hostname:
1. The primary SpectroSERVER's database will have precedence 10 (attribute 0x12c0a on every model)
2. Assume your secondary SpectroSERVER has precedence 20
3. On both servers, add attribute 0x12c0a to $specroot/Notifier/.alarmrc so it is passed to the scripts as $SANM_0X12C0A
4. In SetScript and ClearScript, in the section just after
if [ "$SENDMAIL" = "True" ]
On the primary, add this:

if [[ "$SANM_0X12C0A" = "20" ]]
then
    # secondary is active - log and skip sending from the primary
    echo "SS Secondary is running"
    echo "Precedence = $SANM_0X12C0A"
    exit 0
fi
On the secondary, add this:

if [[ "$SANM_0X12C0A" = "10" ]]
then
    echo "SS Primary is running"
    exit 0
fi
Save the scripts and recycle the AlarmNotifier.
Whenever an alarm is generated, the model in the database is checked and attribute 0x12c0a is read. If it is 10 (primary server precedence), the primary AlarmNotifier sends the email and the secondary writes a line to its Notifier log file saying the primary is running.
If 0x12c0a is 20, the secondary AlarmNotifier sends the mail and the primary writes to its Notifier log saying the secondary is running.
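To make the decision concrete, here is a minimal, self-contained sketch of that logic. The should_send helper and its arguments are illustrative only; in the real SetScript/ClearScript the check sits inline after the SENDMAIL test, and the precedence values (10/20) are the ones assumed above.

```shell
#!/bin/sh
# Sketch of the precedence check, assuming each server knows its own
# precedence and AlarmNotifier exports the model's 0x12c0a value as
# SANM_0X12C0A. should_send is an illustrative helper, not a SANM API.

should_send() {
    my_precedence="$1"      # 10 on the primary, 20 on the secondary
    alarm_precedence="$2"   # value of attribute 0x12c0a for the alarmed model
    if [ "$alarm_precedence" = "$my_precedence" ]; then
        echo "send"         # this server currently owns the model: mail it
    else
        echo "skip"         # the other server owns it: just log and return
    fi
}

# Example: on the primary (precedence 10)
should_send 10 "10"   # prints "send"
should_send 10 "20"   # prints "skip" - the secondary is active
```

Because both servers run the same check against their own precedence, exactly one AlarmNotifier mails any given alarm.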
Thank you, your idea has been very helpful for me!!
We handled this a little differently. We wanted to account for the case where the AlarmNotifier process could die or fail even when the SpectroSERVER process was still running (a situation we've seen on numerous occasions, particularly with default type logging when NOTIFIER.OUT exceeds 2GB). Also note, we have a very large distributed Spectrum environment (over a dozen primary and over a dozen fault-tolerant SpectroSERVERS).
We configure the same custom Notifier scripts (SetScript, etc.) on the designated Primary and Secondary (Fault-Tolerant) Spectrum systems. Inside the scripts is a check to look for the file "$HOME/Notifier/.SpectrumAlert.stop". If that file exists, they will log the alert, but not actually generate a ticket to our ticketing system.
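A minimal sketch of that gate, assuming it sits near the top of SetScript; ticket_allowed is an illustrative name, and the real scripts also log the full alert details:

```shell
#!/bin/sh
# Hedged sketch of the stop-file gate described above. Only the stop-file
# path comes from the post; the helper name is illustrative.

STOPFILE="$HOME/Notifier/.SpectrumAlert.stop"

ticket_allowed() {
    # succeeds (returns 0) only when the stop file does NOT exist
    [ ! -f "$STOPFILE" ]
}

if ticket_allowed; then
    : # ...normal ticket-generation logic would run here...
else
    echo "stop file present: alert logged, no ticket generated"
    exit 0
fi
```

The same scripts can then be deployed unchanged on both servers; only the presence or absence of the stop file differs.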
The primary system should never have that file, unless we're manually placing it for testing. The secondary will always have that file, unless there is a problem with the primary. We verify it by running a cron job script every 5 minutes that does the following:
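The exact steps did not make it into the post, but a sketch of such a cron job on the secondary might look like the following. PRIMARY_HOST, the ssh/pgrep probe, and the function names are all assumptions; substitute whatever health check your environment supports.

```shell
#!/bin/sh
# Hedged sketch of the 5-minute cron check on the SECONDARY server.
# The probe here assumes passwordless ssh to the primary and that the
# notifier process matches "AlarmNotifier"; adjust to your setup.

PRIMARY_HOST="primary-ss"                       # hypothetical hostname
STOPFILE="$HOME/Notifier/.SpectrumAlert.stop"

primary_notifier_alive() {
    # Assumed check: is an AlarmNotifier process visible on the primary?
    ssh "$PRIMARY_HOST" pgrep -f AlarmNotifier >/dev/null 2>&1
}

sync_stopfile() {
    # $1 = "alive" or "down" (result of the probe above)
    if [ "$1" = "alive" ]; then
        touch "$STOPFILE"    # primary healthy: keep the secondary quiet
    else
        rm -f "$STOPFILE"    # primary down: let the secondary cut tickets
    fi
}

# Run from cron every 5 minutes, e.g.:
# */5 * * * * if primary_notifier_alive; then sync_stopfile alive; else sync_stopfile down; fi
```

The key point is that the probe tests the AlarmNotifier process itself, not just the host, so it also catches the "notifier died but SpectroSERVER is fine" case mentioned above.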
Because it runs via cron every 5 minutes, it automatically enables or disables alerting between the primary AlarmNotifier and the secondary AlarmNotifier without us having to intervene. It also covers every failure scenario we could think of.