DX Unified Infrastructure Management

  • 1.  SQL statement to delete data in bulk

    Posted Sep 09, 2010 01:25 AM

    I'm looking for a SQL statement I can run that will delete all data in the NMS SLM database that hasn't been updated in, say, 3 months. For example, delete any QoS object whose last collected sample is more than 3 months old, meaning the data used to be collected but isn't being collected anymore. Using the SLM for this is a nightmare. Any ideas?



  • 2.  Re: SQL statement to delete data in bulk

    Posted Sep 13, 2010 03:15 PM

    I don't have a way to do this, but I think SLM works pretty well for it if the number of QoS objects is limited. In large environments, I would agree that the SLM is painful for bulk deletes. However, the relatively new option to show only the QoS objects for certain types makes it a little better. If you really need to do a bulk delete in SLM, you can do just certain types each time to keep the list to a reasonable size.

     

    -Keith



  • 3.  Re: SQL statement to delete data in bulk

    Posted Nov 11, 2010 10:47 PM

    I have noticed that in large databases the SLM columns do not filter correctly, and the deletes do not refresh their status very well.



  • 4.  Re: SQL statement to delete data in bulk

    Posted Nov 11, 2010 10:53 PM

    I opened a case on this (https://na4.salesforce.com/50060000008s5AB - feel free to add comments) and got somewhat of an answer from dev:

     

    We do not currently have a SQL script for this, but it should be possible to create one by selecting max(sampletime) from each QoS table and deleting the objects that are older than, e.g., 3 months. Please remember to also clean up the other tables with references to the QoS tables.
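    That max(sampletime) approach might look something like this for one RN table (a hedged sketch only; the RN table name and cutoff date are examples, and each QoS type has its own RN_QOS_DATA_* table):

```sql
-- Hypothetical sketch: QoS objects in one RN table whose newest
-- sample is older than the cutoff (table name and date are examples).
SELECT table_id, MAX(sampletime) AS last_sample
FROM RN_QOS_DATA_0014
GROUP BY table_id
HAVING MAX(sampletime) < '2010-10-05 00:00:00.000'
```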

     

    First run the query against S_QOS_DATA to find out which RN tables are related to each entry, then delete them in groups. For example, the following query will give you some idea:

    SELECT *
    FROM S_QOS_DATA A1
    JOIN RN_QOS_DATA_0014 A2 ON A1.table_id = A2.table_id
    WHERE A2.sampletime < '2010-10-05 14:49:46.000'

    Basically I am joining S_QOS_DATA and RN_QOS_DATA_0014. I have already seen that this S_QOS_DATA entry uses RN table 0014, so joining these tables gives me the rows whose sampletime is less than the given time.

    In this way you can identify the data to delete.

     

    You'd probably write a delete statement to remove that selected data.
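    For example, the DELETE version of that SELECT might look roughly like this (a hedged sketch, not a tested statement; SQL Server syntax, and the r_table column naming each entry's RN table is an assumption about the schema — back up the database first):

```sql
-- Hypothetical sketch only. Delete the old samples from the RN table
-- first, then remove S_QOS_DATA entries with no samples left.
DELETE A2
FROM RN_QOS_DATA_0014 A2
JOIN S_QOS_DATA A1 ON A1.table_id = A2.table_id
WHERE A2.sampletime < '2010-10-05 14:49:46.000'

DELETE FROM S_QOS_DATA
WHERE r_table = 'RN_QOS_DATA_0014'  -- assumed column naming the RN table
  AND table_id NOT IN (SELECT table_id FROM RN_QOS_DATA_0014)
```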



  • 5.  Re: SQL statement to delete data in bulk

    Posted Jan 14, 2011 02:56 AM

    Chris wrote a Lua script that helped me clean stale data out of the demo system, so I wouldn't have to wait for the data_engine to do so. I think this will help many people, and so far it is the best solution I've seen. Here's his post to the general forums:

     

     

    Hi,

     

    as you know, you can define the retention period for qos-values in the data_engine, so after a certain amount of time the qos-values will be deleted.

     

    But there is one problem that might remain:

    Let's imagine you monitor disk capacity for drive H: on a windows server called "serverA". You would then find qos-values for source=serverA, target=H: in your qos-tables.

    Now this drive no longer exists and the probe has stopped collecting the values; once your retention period is over, those values will be deleted.

    But in the Service Level Manager, as well as in the Performance Report Designer / QosChart portlet in UMP, you will still find an entry for serverA -> H:, just without any values. That entry (in the s_qos_data table, by the way) is an orphan.

     

    You might want to clean up such data on a regular basis, especially if you did a lot of testing with some probes but in the end did not keep all the QoS metrics for production use.

     

    For this purpose, I wrote a small lua-script which you can run in your environment. It is built to be run inside the Nimsoft Alarm Server (nas) and could be scheduled to be run automatically.

     

    All you need to adapt is to modify the header of the lua-script:

    - change iDays to the number of days after which you want to get rid of qos-definitions. 7 means that the data will be deleted 7 days after the last value was received for a given Source/Target combination.

    - modify sDatabaseType to mysql if you're running MySQL, as the SQL queries are slightly different. If you run Oracle, let me know and I will adapt the script with the correct statements for Oracle.
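    The script itself isn't reproduced here, but as a rough illustration of the header Chris describes (the variable names come from his description; the values and layout are a hypothetical sketch, not his actual script):

```lua
-- Hypothetical header sketch, not the actual script.
iDays = 7               -- delete qos-definitions idle for more than 7 days
sDatabaseType = "mssql" -- set to "mysql" if you run MySQL
bHot = 0                -- 0 = only print the SQL statements, 1 = execute them
```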

     

    NOTE: the first time you run this script, the runtime might be quite long (many hours, even). In this case, please do NOT execute the script directly; instead set bHot = 0 in the script header, and the script will print out the SQL statements so you can run them manually on the SQL command line of your choice to clean up the database.

     

    It is suggested that you run the script with bHot = 0 at least the first time, and that you test the statements with a QoS type that is not critical for you. Please also make sure you have a backup of your database, just in case you end up deleting qos-values you did not want to delete.

     

    have fun!

    chris



  • 6.  Re: SQL statement to delete data in bulk

    Posted Jan 14, 2011 12:40 PM

    Hi Gentlemen,

     

    Thanks for sharing. Perhaps the cleanup routines built into the product should include an option to get rid of orphaned rows.

     

    BTW (a little off topic): Guttorm and I have been working on improving the "update last samplevalue" feature in data_engine. The old one in data_engine (turned off by default) didn't prove very effective. We tried to measure it, and it seemed to handle only around 1,200 rows per second. What's worse, it used an internal list to store these values (for updating s_qos_data), but two threads were using the same list without exclusive locking, so we experienced data_engine crashing randomly.

     

    In addition, it wrapped the work in a begin transaction / commit transaction, which in worst-case scenarios could hold a transaction lock on S_QOS_DATA for up to 3 seconds at a time, blocking everyone else querying the table without the nolock hint.

     

    Anyway, we are rewriting the method to bulk insert the last sample values into a temp table and then update a table we call snapshot with those values. This method is very fast compared to the previous one; we have seen it handle the same throughput as data_engine can for inserting values.
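    The pattern Geir describes could be sketched roughly like this (all table and column names here are illustrative assumptions, not the actual data_engine schema; SQL Server syntax):

```sql
-- Hypothetical sketch of the bulk-update pattern.
-- 1. Bulk insert the collected last-sample values into a temp table.
CREATE TABLE #last_values (table_id INT, sampletime DATETIME, samplevalue FLOAT)
-- ... bulk insert rows into #last_values here ...

-- 2. Update the snapshot table in one set-based statement.
UPDATE s
SET s.sampletime = t.sampletime,
    s.samplevalue = t.samplevalue
FROM S_QOS_SNAPSHOT s
JOIN #last_values t ON s.table_id = t.table_id

DROP TABLE #last_values
```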

     

    Once we have a reliable and fast lookup table of the last sample value for all QoS objects, it should be easy to find orphaned objects, as well as "current" values for dashboards etc., instead of having to query the sample tables, I think.
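    With such a lookup table in place, a stale-object query could be as simple as something like this (again a hedged sketch; S_QOS_SNAPSHOT is a hypothetical name for the snapshot table):

```sql
-- Hypothetical: QoS objects with no sample in the last 3 months.
SELECT q.table_id, q.source, q.target
FROM S_QOS_DATA q
JOIN S_QOS_SNAPSHOT s ON s.table_id = q.table_id
WHERE s.sampletime < DATEADD(month, -3, GETDATE())
```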

     

    Geir.



  • 7.  Re: SQL statement to delete data in bulk

    Posted Jan 19, 2011 06:28 PM

    Hi Geir, that sounds great. Things will be a lot easier once we can use that snapshot table for the latest sample values.

    keep us posted!

     

    -chris