Our Spectrum database backups are approaching 1 GB on one of our SpectroSERVERs. At one time we had 2,000+ servers in Spectrum and have since deleted them, but the DB did not shrink (I didn't expect that it would). Is there a way to compress the database? And if the DB can be compressed, would there be an associated performance gain as well?
If the "Backup Compression" option is enabled in the OnLine Backup configuration, the backup files are compressed with gzip before being written to disk; compressed files are saved with a .gz suffix appended to the filename. If disabled, files are saved uncompressed. The default is enabled.
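Since the compressed saves are ordinary gzip files, you can inspect or decompress them with standard tools before loading them back with SSdbload. A minimal sketch, where the filename is made up for illustration (substitute your actual save file):

```shell
# Stand-in for an actual SSdb save file; the name here is hypothetical.
echo "dummy SSdb contents" > db_20240101.SSdb
gzip db_20240101.SSdb                           # produces db_20240101.SSdb.gz

ls -lh db_20240101.SSdb.gz                      # check the compressed size
gunzip -c db_20240101.SSdb.gz > restored.SSdb   # decompress a copy, keep the .gz
```

Note that compression only shrinks the backup file on disk; it does not change the size of the live SSdb itself.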
Host Configuration (NCM) backup increases the SSdb size.
If you run a lot of discoveries, you could also be creating AdiscResultSet models, which store the results of each run and can also increase the SSdb size.
Try the database_tally script and look at the counts for the "HostConfiguration" and "AdiscResultSet" models.
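A quick way to pull those two counts out of the tally output is a simple grep. The sample output below is fabricated purely to show the filtering step; run the actual database_tally script on your SpectroSERVER and filter its real output the same way:

```shell
# Fabricated database_tally-style output, for illustration only.
cat > tally_output.txt <<'EOF'
Host_systemEDGE      120
HostConfiguration    48500
AdiscResultSet       3100
Rtr_Cisco            210
EOF

# Pull out the two model types that commonly bloat the SSdb.
grep -E 'HostConfiguration|AdiscResultSet' tally_output.txt
```

Unusually high counts for either model type point at NCM configuration retention or accumulated discovery results as the likely source of the growth.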
We don't run a lot of scheduled discoveries, as we rely mostly on trap-based discoveries. We are compressing the backups; my concern is that the backups are large and did not shrink even after we deleted 2,000+ server models. I was wondering whether there is a method or process to compact the database before it is backed up. I'm not a DB guy, but I imagine it would involve eliminating the white space, or something like that. Is anyone familiar with such a process for a Spectrum database?
Just some guesses/hints:
- Have the servers really been "destroyed," or just "removed"? You could use Locator -> Models -> By Model Name to verify whether any models related to the "deleted" servers remain.
- Have you checked for NCM as stated above? Potentially, several thousand devices with 25 configurations each could take quite some space.
Hi, if you use NCM, check the number of stored configurations per device; perhaps you can reduce it.