I wouldn't consider this good practice, since it would lead to duplicate alarms on a normal day-to-day basis (the same alarm asserting on both landscapes). The only way I could see this working for your Operators is if you excluded the other Active Landscape from their Alarm Filter and only included it during a failure event like you described - but that's a slower, manual process and prone to error.
It's very obvious to an Operator when the Secondary SS hasn't taken over (Red vs. Yellow border in OneClick), so your team should be able to notify you promptly if something doesn't look right. There are also a few out-of-the-box (OOtB) alarms that Spectrum will assert if something goes awry with Fault Tolerance (e.g., contact lost to the Secondary SS, alarm synchronization failing, etc.).
I suggest you 1.) run through a few scheduled failovers of the Primary to confirm the Secondary SS takes over correctly, and then 2.) handle any issues with the Secondary as they arise. For what it's worth, I've yet to have a problem with the Secondary SS.
A few things to consider:
- How frequently you run your OnlineBackups dictates the "freshness" of the Secondary's database of devices/models
- On the Secondary, increase "max_event_records" in the $SPECROOT/SS/.vnmrc so the Secondary retains a longer history of events to then sync back to the Primary
- On the Secondary, set "secondary_polling=yes" in the $SPECROOT/SS/.vnmrc so the Secondary is "Hot" and can take over immediately
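Putting the last two items together, the Secondary's $SPECROOT/SS/.vnmrc would contain entries along these lines - a sketch only, and the 50000 value is purely illustrative (size max_event_records to however much event history you want the Secondary to retain and sync back):

```
max_event_records=50000
secondary_polling=yes
```

Restart the Secondary SS after editing .vnmrc so the new values take effect.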