DX Infrastructure Manager

Probe of Things - A Custom Probe That Does Things

By BryanKMorrow posted 07-22-2016 10:43 AM

  

This probe provides a UIM administrator with small callback utilities that make managing the UIM infrastructure a little easier. It is under constant development, with new utilities added regularly. If you have suggestions for new automated utilities, or feedback on the current state, please post in the comments below.

 

 

PLEASE NOTE: When the automation_device_wiper callback is used with the delete_qos option enabled, it performs a delete against all of the Raw and Summary tables. This can put significant load on the database server and may cause deadlocks.

 

Probe New Features
This probe is a collection of useful callback utilities to help a UIM administrator manage their system. You can accomplish the following tasks currently with this probe:

 

NEW: First attempt at a UIM topology map (hubs with large robot counts won't work OOTB currently; still working on the dynamic sizing).
UPDATED: HTML 5 report for top probes in use; now supports a top_n parameter to adjust the default of 15.

UPDATED: MySQL connectivity when special characters are used should now be fixed.


Probe Existing Features
• Encrypt passwords for profile usage in almost all current probes
• Update the interface alias for a specified device->interface (Usually only available in USM)
• Reset a probe’s security on a specified robot
• Retrieve a list of source->targets where data has not been received in a specific time frame
• Remove a device or list of devices from discovery with an option to delete QOS

• Delete QOS by providing a list of targets
• Clean niscache of provided list of robot names
• Generate HTML Reports for the following: License Pack Counts, UIM Users, Account Contacts, Hub Subscribers, CDM/Processes/NTServices Thresholds.

      Threshold reports can now take a USM Group Name parameter. NOTE: Needs to be a child group that contains devices.
• Generate HTML Report for UMP database health. Based on TechDoc http://search.ca.com/assets/SiteAssets/TEC1405477_External/UMPUSMSlowPerformanceGuideandTroubleshootingChecklist1.1.pdf
• Manually configure probes by providing the following information: probe name, section, key and value. Can provide a comma-separated list of robots. Multi-threaded.
• Modify single probe configuration by providing JSON
• Collect and ZIP a probe’s log files and configuration files. Could be used for support.
• Collect thresholding information based on configured probe profiles (this is the migration of the threshold_archive probe that is currently on the Communities). Multi-threaded.

      Can now be limited by providing a hublist parameter. Hub name, not hub address is required.
• Generate HTML SVG report of VMware topology. This feature uses the vSphere API to collect parent-child relationships for your infrastructure. It ‘should’ generate one HTML page for each configured resource for each vmware probe instance. Multi-threaded.
      o NOTE: This connects to vSphere directly, so the probe will need to be on the same physical network as the vCenter or ESX host. The vmware probe doesn’t collect the needed attributes out of the box, which is why it connects directly to vSphere.
• Retrieve list of current USM Groups (Will be needed for future USM group creation feature)
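Many of the callbacks above take simple string arguments such as a comma-separated device list and a yes/no flag. As a rough illustration only (hypothetical function and parameter names, not the probe's actual Java internals), handling such arguments might look like:

```python
def parse_wiper_args(device_list: str, delete_qos: str = "no"):
    """Split a comma-separated device list and coerce a yes/no flag.

    Hypothetical helper; the real probe's argument names may differ."""
    devices = [d.strip() for d in device_list.split(",") if d.strip()]
    wipe_qos = delete_qos.strip().lower() in ("yes", "true", "1")
    return devices, wipe_qos

# e.g. the value typed into a callback field in Probe Utility:
devices, wipe_qos = parse_wiper_args("server01, server02,server03", "yes")
```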

 

 

 

TODO LIST
• Create USM groups from JSON
• Store data in H2 database for license usage over time
• Improve configuration archive and diff reports (configuration_archive functionality)
• Add more probe threshold reports (logmon)
• Improve probe configuration through JSON to support multiple robots and probes
• Continue testing the VMware topology feature

 

 

 

 

REVISION HISTORY

Date                 Version  Change
July 22nd, 2016      1.00     Initial Draft
July 22nd, 2016      1.01     Added devices_with_no_data
July 25th, 2016      1.02     Devices with no data always creates a CSV, no matter the size. Initial merge of healthcheck probe with just list_subscribers creating a CSV.
July 28th, 2016      1.03     Added automation_device_wiper callback
September 9th, 2016  1.04     Removed decrypt password option, added list_accounts/users, can now provide a CSV list of devices for removal
October 4th, 2016    1.05     Added ability to manually configure probe configurations and a new licensing report
October 26th, 2016   1.10     Added niscache_clean, modify probe configurations from JSON, UMP/database health report, threshold gathering and reporting, and vmware topology
November 8th, 2016   1.11     Added support file retrieval and processes/ntservices threshold reports
November 21st, 2016  1.12     Delete QOS by target, inactive probe report, and added filtering for threshold gathering and reporting
December 7th, 2016   1.13     Top probe usage reports, configuration_archive custom probe integration
January 3rd, 2017    1.14     MySQL connectivity fixed, added UIM topology map, top_n parameter to top_probes report


Comments

22 days ago

Hi guys, after changes on community layout I can't see the option to download the probe. How can I download the probe_of_things?

05-03-2019 05:28 AM

For licensing and billing I have attempted to set this up, but have had trouble; I will try again. The Probe of Things had everything in one probe, so it was just much easier and everything was in one place.

04-30-2019 09:30 AM

rtirak

I saw in a previous post there was a new version out there, but I have been unable to find where to download it.

Where did you get your versions?

 

For licensing we use the billing probe and usage_metering probe. The report shows us what probes and packs we have installed in our environment.

 

Flores

04-30-2019 08:32 AM

While googling the error I also found this:

https://communities.ca.com/thread/241817629-ga-announcement-ca-uim-902

 

And if you look down through that thread, service_host is what tries to interpret that data engine connection string. However, that post states that service_host has been deprecated in 8.5/8.5.1, so it sounds like the probe is going to need some rewriting or some kind of workaround.

 

 

I do see versions of the probe of things such as 2.00 and 2.07. However, I am not able to see the licensing data in these versions. Not sure how to generate it, or if it was moved, or what?

04-30-2019 08:17 AM

Flores

 

I also have been trying the different versions. I see the same exact error you see with Probe of Things Version 1.14

 

One of the biggest things that The probe of things did for us among other things was the licensing breakdown/layout. Being able to just pull the report up and see the different licensing packs and the current usage of them. This seems broken in 1.14 and 1.25

 

 

I also tried:

Probe Of Things Version 1.25

 

When I run the licensing_get_all I get the same error:

 

Apr 30 08:13:31:287 [attach_socket, probe_of_things] User exception in callback for public void com.ca.uim.field.ProbeMain.licensingRunFullCheck(com.nimsoft.nimbus.NimSession) throws java.lang.InstantiationException,java.lang.IllegalAccessException,java.lang.ClassNotFoundException,com.nimsoft.nimbus.NimException,java.sql.SQLException,java.io.IOException: java.lang.IllegalArgumentException: Data Engine connection string fails to contain database provider. 
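The error above suggests the probe scans the data_engine connection string for a recognized database provider token and throws when none is found (the string format appears to have changed in UIM 9.x). A minimal sketch of such a check, with an assumed provider list:

```python
# Assumption: the providers the probe recognizes; the real list may differ.
KNOWN_PROVIDERS = ("sqlserver", "mysql", "oracle")

def detect_provider(conn_string: str) -> str:
    """Return the first recognized provider token found in a JDBC-style
    connection string, or raise the same kind of error the log shows."""
    lowered = conn_string.lower()
    for provider in KNOWN_PROVIDERS:
        if provider in lowered:
            return provider
    raise ValueError(
        "Data Engine connection string fails to contain database provider."
    )

detect_provider("jdbc:sqlserver://db01:1433;DatabaseName=CA_UIM")  # -> "sqlserver"
```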

04-29-2019 12:58 PM

Robert,

I am using:

UIM 9.0.2

Probe of things 1.14

 

Does anyone know where to get the latest version of this probe?

 

Some of the reports work right out of the box:

Threshold:

CDM, NTservices

Healthcheck:

Inactive Probes

 

When I run the others I get:

Apr 29 08:55:19:660 [attach_socket, probe_of_things] Exception in ThreadClient: java.lang.IllegalArgumentException: Data Engine connection string fails to contain database provider.
Apr 29 08:55:19:660 [attach_socket, probe_of_things] java.lang.IllegalArgumentException: Data Engine connection string fails to contain database provider.

 

Thanks,

Flores

03-21-2019 01:16 PM

Did anyone ever find out anything on this getting supported? Any news or update? Anyone have it working in 9.0.2?

11-16-2018 11:05 AM

Thanks Gene. We need this in the product as it's essential. +1

11-16-2018 08:40 AM

Just FYI I have sent an email to the head of the dev team and support management to bring this concern to their attention.

Hopefully, we will get a response before too long!

11-16-2018 08:19 AM

Here you are, gentlemen. An Idea has been created for this, so let's vote it up!!!

 

Make Probe of Things Probe as Part of the CA UIM Product 

11-16-2018 08:10 AM

It is my understanding that product management does not follow this blog and so it is necessary for an Idea to be posted if it is to be considered for inclusion into the product.

11-16-2018 07:55 AM

I also agree with Chris Knowles, add it to the product. Keep its development moving forward and support it. We use it often in our administrative tasks. Or please add the source code to somewhere like GitHub so that we as a community can all collaborate to keep it moving forward.

11-15-2018 04:03 PM

Here's a thought... how about add this to the product!  It's one of the few things out there that gives the product admins some useful functionality - most of which should have been there out-of-the-box.  There's a reason why so many people use it...  

11-15-2018 01:06 PM

Well the good news so far is the source code was stored internally so we have access to it.

I am still talking with management on how or what should be done as so many clients do use this.

11-15-2018 12:34 PM

Hi Dan,

 

I am checking with the UIM team and management to see if any KT was done on this.

My gut feeling is that none was done as this was a personal pet project of Bryan's.

 

I personally have never used this, so I cannot provide much insight currently.

I am not even sure if the source code needed for this probe is available.

 

I will post an update when I know a little more.

11-14-2018 12:11 PM

Hello, I'd like to know if anyone at Broadcom will be picking up this probe and working on it now that Bryan has left?

It's essential in our environment and we use it to keep backups of all probe cfgs on all robots.

 

I'm having an issue with the probe where it's not pulling the probe cfg.

I'm running the v2.07 and all probes are just getting:

"NOT FOUND"

in the respective Domain\Hub\Robot\PROBE\YYYMMDD-probe.cfg file when running the

automation_get_probe_configs probe callback.

11-01-2018 04:24 AM

Hi,

 

please look for a UIM JRE package equal to or newer than 1.6.0 in the local hub archive or on the robot you are trying to deploy to.

10-31-2018 12:40 PM

I am getting an error stating "dependency problem". Could you please help me fix it?

10-31-2018 12:33 PM

When I try to deploy this probe, I get an error saying dependency problem. Please let me know how to fix this.

10-29-2018 11:28 AM

Also, am I able to use the probe to inventory any probe, or just the few like CDM, processes, ntservices, etc.?

10-25-2018 02:55 PM

Bryan, has the documentation for this been moved? I have the newest version installed, however I am having problems running the threshold reports. Also, are the threshold reports still limited to just the CDM, ntservices, processes, and vmware probes? Is it possible to just run something like a * on the complete hub and get the metric and threshold report for all robots and all probes on those robots?

 

Also, I am trying to link the dashboards in CABI, however I do not have the ability to choose my own driver. I only have a drop-down, and the H2 driver is not listed as a JDBC source type.

09-04-2018 09:57 AM

I don't have the secret key for e2e-appmon configured for encryption because I don't know it. Without the key I can't generate the appropriate password.

 

Sorry.

09-04-2018 06:34 AM

Hi Bryan,

 

for very special security requirements I have to write the encrypted password string directly to the e2e_appmon probe config file. With your probe-of-things callback 'automation_encrypt_password' I only get 'error' returned.

If I try it with another probe, e.g. logmon, I get an encrypted password string returned.

Do I do something wrong?

 

Thanks and

Regards

07-17-2018 04:50 AM

Good day

 

Is there any new progress on the probe?

 

Thanks

06-05-2018 10:57 AM

Probe of Things is an exciting idea and I've been trying it out with success.
However, I see that each of the callbacks needs to be run manually.
I was wondering if we could automate this, i.e. schedule certain callbacks to run at a specified interval, say each day or once a week. Also, this should be done through the probe directly rather than using a schedule through nas scripts.
This, in my view, could be a nice feature for the callbacks related to UIM health.
I would like to hear your thoughts on this; any suggestions on how we could make this work are most welcome.
My initial attempt is to schedule a script to run some health-related probe_of_things callbacks and publish the dashboard rather than running them manually.
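The polling idea described above can be sketched with a plain scheduler loop. This is only a hypothetical illustration: a real implementation would invoke the probe's callbacks (via nas/LUA or the SDK), and the interval would be a day or a week rather than fractions of a second:

```python
import sched
import time

def schedule_callback(callback, interval_seconds, iterations):
    """Run `callback` every `interval_seconds`, `iterations` times.

    Sketch only; `callback` stands in for a health-related probe callback."""
    scheduler = sched.scheduler(time.time, time.sleep)
    for i in range(1, iterations + 1):
        scheduler.enter(interval_seconds * i, 1, callback)
    scheduler.run()  # blocks until all scheduled runs complete

runs = []
# Tiny interval so the demo finishes quickly; a real schedule would use 86400.
schedule_callback(lambda: runs.append("health_report"), interval_seconds=0.01, iterations=3)
```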

05-24-2018 09:02 PM

The probe is written in Java and uses many different technologies like REST and SQL.

05-24-2018 11:09 AM

I know it's a funny question I'm asking: how did you implement this, and what technologies are used here?

04-03-2018 10:26 AM

In the 2.XX versions I had reworked the reporting piece to be about 1000x faster and use less resources. I'm currently reworking the reporting again, as I'm trying to make it easier for the end user to create probe specific reports. The reports will be CSV instead of HTML 5.

 

There is no updated document, but I will try and create a new one when I'm done with the current report retool.

04-03-2018 08:28 AM

Good day Bryan

 

Just some questions on the probe.

1) What is the difference between 1.14 and 2.06? It seems 2.06 is specifically focused on getting probe configurations?

2) Have you got a document as to how the reports and dashboards must be setup for version 2.06 or does the probe deployment handle this?

 

Thanks

Jan van Heerden

03-13-2018 04:10 PM

Hello Bryan,

 

I ran the test and now it works.

 

The probe created the directories for each HUB / Robot / probe.

 

Thank you

 

03-12-2018 01:36 PM

Thank you Bryan,

I'll test it, and I'll come back with the result.

03-09-2018 06:59 PM

Here is the latest version, you'll need to delete the /db folder after stopping the existing probe before deploying this one.

 

https://s3.us-east-2.amazonaws.com/probe-of-things/Version+2/probe_of_things.zip 

03-02-2018 02:48 PM

Thank you Bryan,

 

I'm using version 1.14.

 

03-02-2018 02:40 PM

I think you have a version with the file path bug, give me until early next week and I'll upload a newer version that resolves this.

 

What version are you using?

03-02-2018 02:34 PM

Bryan,

 

The following error occurred while saving the files.

Do you know what could be wrong?

 

Mar 02 16:23:47:776 [Thread-2, probe_of_things] NimException: = (90) Configuration error, java.io.FileNotFoundException: /opt/nimsoft/backup_conf\BBVAVE_1\BBV_PRO_CM#CCPRLAP30\logmon\201802080559--logmon.cfg (No such file or directory): /opt/nimsoft/backup_conf\BBVAVE_1\BBV_PRO_CM#CCPRLAP30\logmon\201802080559--logmon.cfg (No such file or directory)
Mar 02 16:23:47:776 [Thread-2, probe_of_things] Finished.

 

 

The files are being saved in "/opt/nimsoft" and not in "/opt/nimsoft/backup_conf".
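The path in that error mixes Unix and Windows separators, which is why the files land in /opt/nimsoft instead of under backup_conf. A sketch of the kind of separator normalization that avoids this (a hypothetical helper, not the probe's actual code):

```python
def to_unix_path(base, *parts):
    """Join archive path components with forward slashes, converting any
    Windows-style backslashes so the result is valid on a Linux robot."""
    pieces = [base.rstrip("/")]
    for part in parts:
        pieces.extend(p for p in part.replace("\\", "/").split("/") if p)
    return "/".join(pieces)

path = to_unix_path("/opt/nimsoft/backup_conf",
                    "HUB_1\\HUB#ROBOT\\logmon",
                    "201802080559--logmon.cfg")
```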


 

03-02-2018 01:32 PM

Thank you Bryan,

 

Can you tell us when you are going to release the next version of probe_of_things?

03-02-2018 12:29 PM

Yes, just change it to a unix style path:  /opt/nimsoft/archive or something similar

03-02-2018 09:44 AM

Good morning ,

 

I deployed probe_of_things on a Linux robot, and I was in doubt about the "archive_path" parameter; do I need to change it?

 

Ex:  "/opt

 

Because the default value is "D:\archive_path".

 

 

 

Thank you.

02-13-2018 10:23 AM

I’ve never seen this before, but you could always deactivate and reactivate the probe to see if that stops the issue.

02-13-2018 10:22 AM

It has always collected all the configurations and then done a diff against the previous. I never got around to adding the pruning, I will try and take a look at this in the next few weeks. 


02-13-2018 08:54 AM

I see that each time I run the "automation_get_probe_configs" command it saves all .cfg files, even though nothing has changed since the last run. I remember this used to be different, it'd only save a .cfg when something changed. I think this is contributing to the growing database as well.
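Change-only archiving of the kind described here is usually done by comparing a content hash against the last stored copy. A sketch, assuming a simple in-memory map of previous hashes (the probe presumably keeps these in its H2 database):

```python
import hashlib

previous = {}  # probe path -> sha256 of the last archived config

def should_archive(probe_path: str, cfg_text: str) -> bool:
    """Archive a .cfg only when its content changed since the last run,
    so unchanged configurations do not pile up in the database."""
    digest = hashlib.sha256(cfg_text.encode("utf-8")).hexdigest()
    if previous.get(probe_path) == digest:
        return False  # unchanged: skip
    previous[probe_path] = digest
    return True
```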

02-12-2018 07:47 AM

Hi Bryan, I have some questions about the "automation_get_probe_configs" option.

 

In the config file there is a "archive_daily" and "archive_interval_days" setting. What is difference between these two? Also, since I wanted more control over when the config backup started, I set "archive_daily" to no, and schedule the command via NAS.

 

I am running v1.25 on 5 separate UIM domains/installations. Normally the H2 database is about 200MB. On 3 domains, after a couple of days the H2 database suddenly started growing really fast. One was 16GB, one 18GB and another 24GB. Normally the probe logs a bunch of stuff like:

 

Feb 03 07:38:51:669 [attach_socket, probe_of_things] Callback rebuild_configuration_database starting...
Feb 03 07:38:51:669 [Thread-6, probe_of_things] Using thread pool size of:25. This can be configured in probe.cfg if desired.
Feb 03 07:38:51:693 [Thread-6, probe_of_things] Getting last previously stored configurations
Feb 03 07:38:56:633 [Thread-6, probe_of_things] previousConfigs.size = 2833
Feb 03 07:38:56:747 [Thread-6, probe_of_things] Building hub list, running gethubs
Feb 03 07:38:56:862 [Thread-6, probe_of_things] hubDomain: uim-decent-02_domain domain: uim-decent-02_domain
Feb 03 07:38:56:862 [Thread-6, probe_of_things] Hub ts-customer_hub matches <setup> filter, adding to list to run
Feb 03 07:38:56:862 [Thread-6, probe_of_things] hubDomain: uim-decent-02_domain domain: uim-decent-02_domain
Feb 03 07:38:56:862 [Thread-6, probe_of_things] Hub ts-company_hub matches <setup> filter, adding to list to run
Feb 03 07:38:56:862 [Thread-6, probe_of_things] hubDomain: uim-decent-02_domain domain: uim-decent-02_domain
Feb 03 07:38:56:863 [Thread-6, probe_of_things] Hub uim-decent-02_hub matches <setup> filter, adding to list to run

 

 

But after a few days the logging is down to a few lines per day. And at this point, the H2 database started growing really fast.

 

Feb 03 07:50:23:952 [attach_socket, probe_of_things] Callback rebuild_configuration_database starting...

Feb 03 07:50:23:953 [Thread-2, probe_of_things] Using thread pool size of:25. This can be configured in probe.cfg if desired.

Feb 03 07:50:24:020 [Thread-2, probe_of_things] Getting last previously stored configurations

Feb 04 07:50:23:598 [attach_socket, probe_of_things] Callback rebuild_configuration_database starting...

Feb 04 07:50:23:601 [Thread-3, probe_of_things] Using thread pool size of:25. This can be configured in probe.cfg if desired.

Feb 05 07:50:23:644 [attach_socket, probe_of_things] Callback rebuild_configuration_database starting...

Feb 05 07:50:23:647 [Thread-4, probe_of_things] Using thread pool size of:25. This can be configured in probe.cfg if desired.

 

Have you seen this problem before? The database suddenly becoming this big isn't desirable.

02-06-2018 06:30 AM

Hi Bryan, I don't know why, but I am getting a communication error while running "health check get device no data". Can you please help here?

 

Thanks

Ankur

01-23-2018 02:09 PM

First thing I would try is to deactivate and then reactivate the probe; if that doesn't work, then I would deactivate the probe, delete the /db folder in the probe installation directory, and then reactivate it.

01-23-2018 01:35 PM

Hello Bryan,

 

I am running the "automation_get_probe_configs", but it is not working, it displays the following error in the log.

 

Jan 23 15:56:59:671 [pool-1-thread-95, probe_of_things] Error executing prepared statement: Database is already closed (to disable automatic closing at VM shutdown, add ";DB_CLOSE_ON_EXIT=FALSE" to the db URL) [90121-192]


Do you know how we can fix it?
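As a side note, the H2 message itself names a workaround: append ;DB_CLOSE_ON_EXIT=FALSE to the database URL. A small helper illustrating that (the probe's actual JDBC URL here is an assumption):

```python
def disable_auto_close(jdbc_url: str) -> str:
    """Append H2's DB_CLOSE_ON_EXIT=FALSE setting (quoted in the error
    message) to a JDBC URL, unless it is already present."""
    if "DB_CLOSE_ON_EXIT" in jdbc_url.upper():
        return jdbc_url
    return jdbc_url + ";DB_CLOSE_ON_EXIT=FALSE"

# Hypothetical embedded-database URL for the probe's /db folder:
url = disable_auto_close("jdbc:h2:./db/probe_of_things")
```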

 

Tks,

01-23-2018 11:29 AM

This version of the probe_of_things can back up the configurations without database access. You should be able to just run the "automate_get_configs" or whatever the callback is called without access to the UIM database.

01-23-2018 11:26 AM

Bryan,

I want to back up all probe configs, so I've been looking at the probe_archive and the probe_of_things for this. Can I also use the backup function without a database connection (internal or external)? If I could simply get the raw config files as plain .cfg files on my hard disk, I'd be happy.

10-20-2017 11:20 AM

Quick Update. I've been working on an improved Probe Configuration Archive version that provides some additional database storage options (MSSQL, Oracle, MYSQL) and also JSON document storage in Amazon DynamoDB.  This will remove the H2 embedded database from the probe. What I am looking for is people with MYSQL and Oracle databases that would be willing to test the connectivity. It works in my small lab, but I need to test in larger environments.

 

If you are interested in testing this, please contact me at bryan.morrow@ca.com.

 

Thanks,

 

Bryan

10-19-2017 09:31 AM

Hi Bryan,

 

I use the Probe-of-things very often. Great Job. Thanks.

Recently, I got an error message in logfile when I use callback 'automation_device_wiper':

* MCS found, checking if device is a poller or model device, if so skipping.

* Invalid column name 'model_device'

* User exception in callback for public void com.ca.uim.field.ProbeMain.automationWipeDevice(com.nimsoft.nimbus.NimSession,java.lang.String,java.lang.String,java.lang.String,java.lang.String) throws com.nimsoft.nimbus.NimException,java.lang.InstantiationException,java.lang.IllegalAccessException,java.lang.ClassNotFoundException,java.sql.SQLException: java.lang.NullPointerException

 

I don't know when this issue started occurring. Maybe since I updated UIM.

 

Regards,

10-03-2017 09:28 AM

I will look into it when I next work on the probe.

10-02-2017 08:21 PM

Hi,

 

I would like to know whether your tool could add a temp table to store the incoming cfg file and do a comparison first:

Back it up if there is a change, and then do the data entry to the tables.

If four or five cfgs exist, delete the oldest file.

Remove the old data entries associated with the old configuration files.

 

I tried to introduce the Configuration_Archive probe in my environment.

I was asked by my colleague about the disk resources it takes up if it triggers every minute with 10000 cfgs, which I couldn't answer.

I would like to know if the probe could have the above features to lower the disk usage.

09-29-2017 05:11 PM

I created two dashboards for the probe Bryan created. Take a look at the link below.

Dashboard for Probe of Things - A Custom Probe That Does Things

09-28-2017 04:08 PM

Creating a database index helps a lot to improve the performance.

09-27-2017 02:36 PM

Hi Bryan, I forgot to say that, looking at all the views and comments on this topic, this is probably one of the most popular topics. So maybe you can make a new topic so we can all upvote this to be a part of UIM?

Regards.

David

09-27-2017 02:18 PM

Hi Bryan, thanks for the great work. This is surely the probe we miss the most in the current production UIM version. My team has to account for any changes in thresholds and deliver live reports about what we are monitoring, due to audits. It's a multi-tenant installation, and currently we are using the marketplace archive probe for the changes. But we haven't got a nice report of what we monitor for each customer. I love the configuration change alarms that I saw in your probe, and hopefully you get some time to get to your "to do" list as well. I think the functionality of this probe should be in the production UIM, with current probe configurations and configuration change dashboards/reports.

Best Regards,

David

09-26-2017 04:37 PM

I will try to get it updated to include those two reports and fix that column entry. It's also possible if you are using version 2 that you needed to delete the database in order for the schema to be changed from the 1.x version.

 

Bryan

09-25-2017 05:53 AM

Hi Bryan,

 

Are you still working on the 2.00 version? Somehow this is the only version that seems to work in my environment. But at the moment I'm missing the ntservices and logmon/ntevent thresholds. And I cannot make the Hublist too long, otherwise I'll get the error below.

 

Sep 22 16:08:54:073 [pool-6-thread-10, probe_of_things] Value too long for column "DISK VARCHAR(50) SELECTIVITY 12": "'#u01#tmp.mondo.4526#tmp.mondo.23310#mountpoint.1809' (51)"; SQL statement:
INSERT INTO threshold_current_cdm_disk (hub, robot, disk, missing_alarm, fixed_error_active, fixed_error_threshold, fixed_warning_active, fixed_warning_threshold, delta_error_active, delta_error_threshold, delta_warning_active, delta_warning_threshold, inode_error_active, inode_error_threshold, inode_warning_active, inode_warning_threshold) values (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?) [22001-192]
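The failure above is a 51-character mount-point name being inserted into a VARCHAR(50) column. Until the schema is widened, the usual fix is truncating values to the column width before the INSERT; a sketch (hypothetical helper, not the probe's code):

```python
DISK_COLUMN_WIDTH = 50  # matches the VARCHAR(50) column in the error

def fit_disk_value(value: str, limit: int = DISK_COLUMN_WIDTH) -> str:
    """Truncate over-long disk/mount-point names so the INSERT cannot
    fail with H2's 'Value too long for column' exception."""
    return value[:limit]

# The 51-character value from the log above:
disk = "#u01#tmp.mondo.4526#tmp.mondo.23310#mountpoint.1809"
```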

09-19-2017 10:13 PM

I will look into it. 

09-19-2017 07:57 PM

Is there any intention of adding Oracle support in the future?

09-19-2017 06:34 PM

Unfortunately, the current state of the probe does not support Oracle. 

09-19-2017 05:59 PM

Is this probe compatible with Oracle?  I'm getting a lot of errors for some of the callbacks.  

 

Example:

Sep 19 17:49:39:996 [attach_socket, probe_of_things] ERROR: ORA-01747: invalid user.table.column, table.column, or column specification

Sep 19 17:49:40:002 [attach_socket, probe_of_things] Starting table size queries
Sep 19 17:49:40:038 [attach_socket, probe_of_things] ERROR: ORA-00923: FROM keyword not found where expected

Sep 19 17:49:40:043 [attach_socket, probe_of_things] Starting poor performing queries
Sep 19 17:49:40:077 [attach_socket, probe_of_things] ERROR: ORA-00923: FROM keyword not found where expected

Sep 19 17:49:40:082 [attach_socket, probe_of_things] Starting DB I/O activity queries
Sep 19 17:49:40:118 [attach_socket, probe_of_things] ERROR: ORA-00906: missing left parenthesis

Sep 19 17:49:40:123 [attach_socket, probe_of_things] Starting database maintenance queries
Sep 19 17:49:40:157 [attach_socket, probe_of_things] ERROR: ORA-00972: identifier is too long

Sep 19 17:49:40:161 [attach_socket, probe_of_things] Starting database NAS queries
Sep 19 17:49:40:199 [attach_socket, probe_of_things] ERROR: ORA-00923: FROM keyword not found where expected

06-14-2017 03:40 PM

Nice!!!

05-29-2017 11:49 AM

Hello Bryan,

You do a very good job, thank you!

I have a problem with the vmware report: for a vCenter, only information about the first datacenter is displayed.
Can you add the management of x datacenters?

Thank you,
Pascal

05-19-2017 07:21 PM

Bryon,

 

Very nice and useful probe that should have been included as part of the product.

 

Is it possible to save the data to a remote MS SQL instance instead of the H2 DB? This way the data can be properly managed and backed up.

 

For the probe configuration files (from the automation_get_probe_config callback), it would be nice if we could configure a parameter to say I only want to keep X copies of the config and/or keep only files that are less than X days old, etc.

05-18-2017 11:51 AM

I have created a beta version of the report, this can be found in version 1.25 at the following location http://192.99.166.70/nimsoft/probes/probe_of_things.zip.

 

I've only tested this in my small environment and would say its accuracy is probably about 90%, as it tracks the usage over the last 30 days.

 

Clecimar danilo1-c - Could you please test this out and let me know how the report looks?

 

Here is an example:

 

05-17-2017 04:36 PM

As for the update on the licensing by origin, I had to rethink my logic and have got the callback working. I'll now try to add an HTML report in the morning.

05-17-2017 12:52 PM

I'll look at adding this to my next version, no idea on a release date though.  You could script the callbacks with LUA if you had a large number to do, that would be a fairly easy workaround.

05-17-2017 10:00 AM

Bryan

I was using the simple robot name, not the fully qualified one. Any chance of it being able to handle multiple robots and/or probes in the future? It took me over an hour and a half of constant mouse clicking to validate over 150 systems....

Thanks.

05-16-2017 03:17 PM

It takes a single robot address and the probe name as the arguments.

 

/domain/hub/robot

spooler

05-16-2017 03:02 PM

Bryan,

I'm using 1.14 and tried to run the automation_reset_probe_security callback. Initially I tried it against 140 or so servers and their probes (6). I ended up working down to 1 system with 1 probe, and all attempts failed. I entered the simple robot name or its IP address. Again, all failed. What am I doing wrong?

Thanks

05-16-2017 10:17 AM

The latest 'official' release is 1.20, you can find it here: http://192.99.166.70/nimsoft/probes/probe_of_things.zip 

 

There is also a 2.00 build I'm working on that only has the threshold portions working currently. This newer version will support an 'agent_of_things' so that tasks can be run on remote hubs more easily and quickly. Also, the threshold report generation is about 100x faster in this version: http://192.99.166.70/nimsoft/probes/probe_of_things2.00.zip 

05-16-2017 05:27 AM

Dear Bryan,

 

Could you please share the recommended version of probe_of_things for UIM 8.4 SP2?

 

I am using version 1.13 and getting a communication error partway through while running the threshold_get_config.

 

Thanks,

IK

05-11-2017 11:13 AM

I might break them into lists of 25-50, the callback window probably can’t take more than 1000 characters.

 

Bryan
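The batching suggested here (lists of 25-50 robots, staying under roughly 1000 characters per callback invocation) can be sketched as follows; the limits are assumptions taken from the comment above, not documented constraints:

```python
def chunk_robot_list(robots, max_chars=1000, max_items=50):
    """Split robot names into comma-joined batches that stay under both
    an item cap and a character cap for the callback input field."""
    batches, current = [], []
    for name in robots:
        candidate = current + [name]
        too_big = len(",".join(candidate)) > max_chars or len(candidate) > max_items
        if current and too_big:
            batches.append(",".join(current))
            current = [name]
        else:
            current = candidate
    if current:
        batches.append(",".join(current))
    return batches

# 1000 robots, as in the question above, split into safe batches:
batches = chunk_robot_list(["robot%04d" % i for i in range(1000)])
```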

05-11-2017 11:11 AM

I have 1000+ robots and am trying to do them all at the same time.

05-11-2017 11:09 AM

I finished the queries, I'll try to edit the reports ASAP. Might not get this out until tomorrow.

05-11-2017 11:05 AM

You shouldn't have to update any sort of text file, you just need to pass a list of robot names into the callback.

 

device_list: robot1,robot2,robot3

05-11-2017 11:00 AM

Hi Bryan,

 

For cleaning niscache, do I need to update the robot names in a .txt file and then run the callback?

05-11-2017 08:43 AM

Great question!

 

 

 

I need this kind of feature too!

 

 

 

A report that shows the license packs by customer/origin. The billing probe, I believe, needs more development or documentation.

 

 

 

Clecimar

 

 

 


05-10-2017 05:21 PM

I will add this to my agenda tomorrow, hopefully I will find time to add it. 

05-10-2017 05:16 PM

I'm talking about the "Probe Packs" and/or "Probe Pack Details" reports, to be more specific. Each customer consumes a determined number of Probe Packs, and it would be great to see it separated by customer (Origin). Almost like the billing probe does, but better.

05-10-2017 05:13 PM

Are you talking about a specific report or all reports? I have some development versions that allow more filters on the threshold reports. 

05-10-2017 05:08 PM

I made it work properly, Bryan. Thank you very much for your fast answer!

 

I have another question. I have a shared environment with different customers being monitored through UIM, and each of them has its own Origin set.

 

Is there a way I can extract the reports with Origin included?

 

Thanks again!

05-09-2017 11:25 AM

The 'success' message just indicates the callback is being executed on a separate thread; you'll probably need to set the log level to 4 to see what the error might be. One thing to be aware of in this version of the probe: for the VMware mapping to work, the probe machine needs direct access to the vSphere system. So if the probe is running in your datacenter and the vmware probe is at a customer site through a tunnel, you won't get anything.

05-09-2017 11:20 AM

Hello Bryan! 

 

Thank you very much for your efforts in helping the community with this new tool you designed.

 

I got particularly interested in the vmware reading part; however, I cannot make it work.

 

I use the vmware function and it returns success:

But the report comes up empty for me. Is there anything else I'm missing here?

 

Thanks again!

04-13-2017 08:35 AM

Most likely the command is still running; it's just that the Probe Utility window timed out. I'm actually going through a rewrite of the probe and making some features perform better. This is one aspect that I will update as well.

04-13-2017 08:34 AM

The only true prerequisite is that if you are going to run callbacks that contact the database, the probe will need to be on a robot that can communicate with and successfully authenticate to the database (SQL or Windows login). If you are using integrated authentication you will probably need to run the robot as the appropriate Windows user. The H2 database is automatically started when the probe starts, so no configuration of that piece is needed.

04-13-2017 08:30 AM

Hi Bryan,

 

What are the mandatory items to do before using this probe?

 

I am using MS SQL 2012. Do i need to configure H2 Database ?

 

Thanks & Regards,

Imran Khan

04-12-2017 12:28 PM

Hi Bryan. First, I'd like to say the probe is awesome. Congratulations.

 

I tried to run a command and got a communication error message:

Do you have a tip to solve this?

 

Thank you.

 

Clecimar

03-02-2017 07:15 AM

Sounds good, that was my backup plan. Just wanted to check with you. As always You Da MAN!! This probe has been very helpful and your help has been even more appreciated!

02-28-2017 05:53 PM

This has been asked of me before. I had planned on redoing some of the reports when I have some free time, so this is something that is on my radar. I will have to rethink my template when it comes to being able to move them easily. For now, a folder copy would be your best bet. 

02-28-2017 02:35 PM

Bryan, is it possible to send the reports somewhere else rather than the reports folder in the probe directory? For instance, what if I wanted the reports folder to live out on a file share somewhere?

02-27-2017 10:05 AM

That is correct, it lists all S_QOS_DATA entries that don't have any data newer than the number of days provided.
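For anyone who wants to sanity-check that against the database directly, a rough sketch of the idea in MSSQL follows. This is my approximation, not necessarily the exact query the probe runs; the 10-day window is just an example, and it relies on S_QOS_SNAPSHOT holding the latest sample per series, as the queries elsewhere in this thread assume.

```sql
-- Approximate sketch (MSSQL) of the stale-data check; not the probe's exact query.
-- Lists QoS series whose newest sample is missing or older than 10 days.
SELECT d.source, d.target, d.probe, d.qos, s.sampletime AS last_sample
FROM S_QOS_DATA d
LEFT JOIN S_QOS_SNAPSHOT s ON s.table_id = d.table_id
WHERE s.sampletime IS NULL
   OR s.sampletime < DATEADD(DAY, -10, GETDATE());
```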

02-27-2017 09:37 AM

Hi Bryan,

 

I just verified and found that it was because of the timeout. Upon increasing the value it gave me the desired output.

 

Also, when I tried to pull a report of devices with no data for the last 10 days, it gave me a 65 MB data file, and I can see data from 2014 when Nimsoft was installed. Is that how this callback is supposed to work?

 

-Ananda

02-20-2017 10:52 AM

Are you sure the callback isn't timing out and the reports are still running? Watching the log file should tell you more.

02-20-2017 09:54 AM

Hi Bryan,

 

When I tried to run cdm and other probe reports, it fails saying communication error and displays the details of just a couple of servers. Any suggestion?

 

 

-Ananda

02-15-2017 10:59 AM

Can you set the loglevel to 4 and run the callbacks you are trying to populate and then email the logs to me at bryan.morrow@ca.com ?

 

Thanks,

 

Bryan

02-15-2017 10:54 AM

Thanks, but I am following the same procedure and still not getting any data in the reports. The logs look fine.

02-15-2017 09:14 AM

Issac, I have been using this probe for a while now. When you say you can't see any values in the reports, it sounds like you're assuming the probe is automated. Unless you have configured it that way, it does not come set up out of the box to run automatically. If you are using IM you can press CTRL+P while the probe is highlighted and that will bring up the Probe Utility. From there, in the probe commandset drop-down, you can choose which reports you want to run and hit the green run button. Depending on the size of your environment you may want to hit the options button and bump the request timeout up to 200 or 300 seconds. Then, once you see that the different reports have successfully run in the Probe Utility, you can go and look at the reports in the probe folder.

 

OR

 

If you are using the Admin Console, log in and find the robot and the probe_of_things on the robot. Then bring up the menu on the probe_of_things and choose "View Probe Utility in New Window". From there you can find the report or command you want to run and then choose the green Run button. Also, in the Action drop-down menu you can change the timeout value if need be.

 

Hopefully this helps.

02-14-2017 10:21 PM

Bryan,

 

I have deployed this probe on my primary UIM (8.5) but I am not seeing any values in the reports; all are empty.

02-08-2017 10:23 AM

Thanks!

02-08-2017 10:22 AM

I will take a look at this in the next update; I hadn't thought about the service driving the authentication in the previous version.

02-08-2017 10:11 AM

Hi Bryan

 

I get this error:

Feb 08 15:04:16:739 [attach_socket, probe_of_things] Exception in ThreadClient: com.microsoft.sqlserver.jdbc.SQLServerException: This driver is not configured for integrated authentication.
Feb 08 15:04:16:740 [attach_socket, probe_of_things] com.microsoft.sqlserver.jdbc.SQLServerException: This driver is not configured for integrated authentication.
 at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDriverError(SQLServerException.java:170)
 at com.microsoft.sqlserver.jdbc.SQLServerConnection.sendLogon(SQLServerConnection.java:2338)
 at com.microsoft.sqlserver.jdbc.SQLServerConnection.logon(SQLServerConnection.java:1929)
 at com.microsoft.sqlserver.jdbc.SQLServerConnection.access$000(SQLServerConnection.java:41)
 at com.microsoft.sqlserver.jdbc.SQLServerConnection$LogonCommand.doExecute(SQLServerConnection.java:1917)
 at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:4026)
 at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:1416)
 at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectHelper(SQLServerConnection.java:1061)
 at com.microsoft.sqlserver.jdbc.SQLServerConnection.login(SQLServerConnection.java:833)
 at com.microsoft.sqlserver.jdbc.SQLServerConnection.connect(SQLServerConnection.java:716)
 at com.microsoft.sqlserver.jdbc.SQLServerDriver.connect(SQLServerDriver.java:841)
 at java.sql.DriverManager.getConnection(Unknown Source)
 at com.ca.uim.field.db.DAO.GetDB(DAO.java:216)
 at com.ca.uim.field.db.DAO.getAccountList(DAO.java:789)
 at com.ca.uim.field.ProbeMain.licensingRunFullCheck(ProbeMain.java:478)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
 at java.lang.reflect.Method.invoke(Unknown Source)
 at com.nimsoft.nimbus.NimServerSession$ThreadClient.run(NimServerSession.java:192)

 

The NimbusWatcherService runs as a service user which has db_owner access to the DB (on a remote SQL cluster).

Is any correction possible?

 

Thanks

Leandro

02-08-2017 09:58 AM

private static final String MSSQL_NETWORK_ADVANCED_PACK_QUERY = "select COUNT(distinct one.source) as count from S_QOS_DATA one join S_QOS_SNAPSHOT two on two.table_id=one.table_id "
        + " where one.probe in ('pollagent', 'cisco_monitor', 'cisco_nxos', 'cisco_qos', 'interface_traffic', 'snmpcollector', 'snmpget', 'saa_monitor', 'snmptoolkit') "
        + " and two.sampletime >= DATEADD(DAY, -30, GETDATE())";

private static final String MSSQL_NETWORK_ADVANCED_DETAIL = "select distinct three.hub, one.robot, one.source from S_QOS_DATA one "
        + " left join S_QOS_SNAPSHOT two on two.table_id=one.table_id "
        + " left join CM_NIMBUS_ROBOT three on three.robot=one.robot "
        + " WHERE one.probe in ('pollagent', 'cisco_monitor', 'cisco_nxos', 'cisco_qos', 'interface_traffic', 'snmpcollector', 'snmpget', 'saa_monitor', 'snmptoolkit') "
        + " AND two.sampletime >= DATEADD(DAY, -30, GETDATE()) "
        + " ORDER BY three.hub";

private static final String MYSQL_NETWORK_ADVANCED_PACK_QUERY = "select COUNT(distinct one.source) as count from S_QOS_DATA one join S_QOS_SNAPSHOT two on two.table_id=one.table_id "
        + " where one.probe in ('pollagent', 'cisco_monitor', 'cisco_nxos', 'cisco_qos', 'interface_traffic', 'snmpcollector', 'snmpget', 'saa_monitor', 'snmptoolkit') "
        + " and two.sampletime between (CURDATE() - INTERVAL 1 MONTH) and CURDATE()";

private static final String MYSQL_NETWORK_ADVANCED_DETAIL = "select distinct three.hub, one.robot, one.source from S_QOS_DATA one "
        + " left join S_QOS_SNAPSHOT two on two.table_id=one.table_id "
        + " left join CM_NIMBUS_ROBOT three on three.robot=one.robot "
        + " WHERE one.probe in ('pollagent', 'cisco_monitor', 'cisco_nxos', 'cisco_qos', 'interface_traffic', 'snmpcollector', 'snmpget', 'saa_monitor', 'snmptoolkit') "
        + " AND two.sampletime >= (CURDATE() - INTERVAL 1 MONTH) "
        + " ORDER BY three.hub";

02-08-2017 08:48 AM

Bryan, could you also provide me with the statements you use to get the Network Advanced pack number? For licensing I have to be able to compare lists of devices that are in the Network Advanced pack inventory and in the ping pack. The last statements you gave me helped a lot for the ping packs.

02-06-2017 10:48 AM

private static final String MSSQL_PING_PACK_QUERY = "select COUNT(distinct REPLACE(one.target, ':ping', '')) as count from S_QOS_DATA one join S_QOS_SNAPSHOT two on two.table_id=one.table_id "
        + " where one.probe in ('net_connect', 'icmp') and one.qos in ('QOS_NET_CONNECT') and one.target like '%ping%' and two.sampletime >= DATEADD(DAY, -30, GETDATE())";

private static final String MSSQL_PING_PACK_DETAIL = "select distinct three.hub, one.robot, REPLACE(one.target, ':ping', '') as target "
        + " from S_QOS_DATA one "
        + " left join S_QOS_SNAPSHOT two on two.table_id=one.table_id "
        + " left join CM_NIMBUS_ROBOT three on three.robot=one.robot "
        + " where one.probe in ('net_connect', 'icmp') and one.qos in ('QOS_NET_CONNECT') and one.target like '%ping%' and two.sampletime >= DATEADD(DAY, -30, GETDATE()) ORDER BY three.hub";

private static final String MYSQL_PING_PACK_QUERY = "select COUNT(distinct REPLACE(one.target, ':ping', '')) as count from S_QOS_DATA one join S_QOS_SNAPSHOT two on two.table_id=one.table_id "
        + " where one.probe in ('net_connect', 'icmp') and one.qos in ('QOS_NET_CONNECT') and one.target like '%ping%' and two.sampletime between (CURDATE() - INTERVAL 1 MONTH) and CURDATE()";

private static final String MYSQL_PING_PACK_DETAIL = "select distinct three.hub, one.robot, REPLACE(one.target, ':ping', '') as target "
        + " from S_QOS_DATA one "
        + " left join S_QOS_SNAPSHOT two on two.table_id=one.table_id "
        + " left join CM_NIMBUS_ROBOT three on three.robot=one.robot "
        + " where one.probe in ('net_connect', 'icmp') and one.qos in ('QOS_NET_CONNECT') and one.target like '%ping%' and two.sampletime >= (CURDATE() - INTERVAL 1 MONTH) ORDER BY three.hub";

02-06-2017 10:27 AM

Bryan, what is the easiest way for me to see the query that is run to come up with the "Ping Packs" number in the licensing dashboard? I would like to see the exact query it runs to get that number.

02-03-2017 01:34 PM

I also found that when I remove the sampletime DATEADD filter, like so:

 

select COUNT(distinct REPLACE(one.target, ':ping', '')) as count from S_QOS_DATA one join S_QOS_SNAPSHOT two on two.table_id=one.table_id
 where one.robot not in (select robot from Cm_NIMBUS_ROBOT where robot_id in (select distinct(robot_id) as robot_id from CM_NIMBUS_PROBE WHERE probe_name not in ('adserver', 'adevl', 'cloudstack', 'exchange_monitor', 'ica_server', 'lync_monitor',
'notes_server', 'ocs_monitor', 'pvs', 'sharepoint','websphere_mq','xendesktop', 'xenapp','ad_response', 'email_response', 'ews_resposne', 'ica_response', 'jdbc_response', 'notes_response',
'sql_response', 'aws', 'azure', 'google_app_engine', 'google_apps', 'rackspace', 'salesforce','openstack', 'cloudstack', 'vcloud', 'docker_monitor', 'apache', 'easerver', 'iis', 'jboss',
'jdbc_response', 'jmx', 'jvm_monitor', 'tomcat', 'weblogic', 'websphere','db2', 'informix', 'mysql', 'oracle', 'oracle_logmon', 'sqlserver', 'sybase', 'sybase_rs', 'cisco_ucm', 'cisco_unity')
AND last_action >= DATEADD(DAY, -30, GETDATE())))
AND one.probe in ('net_connect', 'icmp') and one.qos in ('QOS_NET_CONNECT') and one.target like '%ping%'

 

I receive a count of 5. Not sure why, though.

02-03-2017 01:18 PM

I'm wondering if the Count 0 that I am getting is actually the correct result. Kind of hard to verify, though.

02-03-2017 11:05 AM

Right, so when I look at it and follow the logic it does make ("mostly") sense that it should work. When I use the SLM and try to run this, the query does run and complete. However, I get a result of "Count 0".

02-03-2017 10:37 AM

This query *might* work.

 

select COUNT(distinct REPLACE(one.target, ':ping', '')) as count from S_QOS_DATA one join S_QOS_SNAPSHOT two on two.table_id=one.table_id
where one.robot not in (select robot from Cm_NIMBUS_ROBOT where robot_id in (select distinct(robot_id) as robot_id from CM_NIMBUS_PROBE WHERE probe_name not in ('adserver', 'adevl', 'cloudstack', 'exchange_monitor', 'ica_server', 'lync_monitor',
'notes_server', 'ocs_monitor', 'pvs', 'sharepoint','websphere_mq','xendesktop', 'xenapp','ad_response', 'email_response', 'ews_resposne', 'ica_response', 'jdbc_response', 'notes_response',
'sql_response', 'aws', 'azure', 'google_app_engine', 'google_apps', 'rackspace', 'salesforce','openstack', 'cloudstack', 'vcloud', 'docker_monitor', 'apache', 'easerver', 'iis', 'jboss',
'jdbc_response', 'jmx', 'jvm_monitor', 'tomcat', 'weblogic', 'websphere','db2', 'informix', 'mysql', 'oracle', 'oracle_logmon', 'sqlserver', 'sybase', 'sybase_rs', 'cisco_ucm', 'cisco_unity')
AND last_action >= DATEADD(DAY, -30, GETDATE())))
AND one.probe in ('net_connect', 'icmp') and one.qos in ('QOS_NET_CONNECT') and one.target like '%ping%' and two.sampletime >= DATEADD(DAY, -30, GETDATE())

02-03-2017 10:24 AM

Hi Bryan,

 

Question for you: we are going through a license "true up" and it looks like we are running into some strange cases where server and ping packs can have the same probes and licenses in both packs. The server pack does not include the ping pack, BUT the server pack does include 2 probes that are in the ping pack and are in use.

 

Is there any easy way to use your probe to come up with which servers are being pinged by a ping pack but are not covered under a server pack? I know this sounds confusing; any help would be appreciated.

 

Thanks,

Bob

02-02-2017 11:52 AM

Just updated the documentation. I removed the configuration_archive.pdf as it had bad information in regards to the database connectivity. The documentation should now include the configuration file options and the database connectivity. At some point I'll put together a use case document to help people understand what all the callbacks can be used for.

 

Thanks!

 

Bryan

01-30-2017 06:45 AM

Hi Yu,

I have developed the query below, but it has given an incorrect total CPU utilization percentage. Could you please validate it and let me know if any changes are required?

select distinct (a.source) as Server, b.samplevalue, b.sampletime
from s_qos_data a, RN_QOS_DATA_0003 b
where a.probe = 'CDM' and a.qos = 'QOS_CPU_USAGE' and a.table_id = b.table_id and a.target = 'total'
order by b.samplevalue DESC

 

Cheers

Alagiri

01-30-2017 06:21 AM

Hello, Alagiri.

The RN_QOS_DATA_XXXX table has your data. Please use "table_id" for filtering.

 

Regards,

Yu Ishitani

01-30-2017 04:57 AM

Hi Yu,

Thanks for your swift response. I have tried the above options but I can't see any table whose name starts with R_ in the Nimsoft SQL database; however, I can see RN_QOS_DATA_*** tables. My requirement is that the business needs the last 3 months of raw historical data (CPU, memory and disk utilization) for all the servers configured in UMP. Is there any way to export the historical data from the UMP database, or any special query we can run against the database to get the historical data?

 

Cheers

Alagiri

01-29-2017 08:36 PM

Hello, Alagiri.

In UIM, performance data can be tracked down in the following way.

1) Each entity of performance data has an entry in the S_QOS_DATA table.

2) From that table, pull the two columns "table_id" and "r_table".

3) The raw performance data is stored in the table named by "r_table". Filter that table's rows using the "table_id" value.
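To make those steps concrete, here is a sketch of the two lookups. The RN table name and the table_id value in the second query are placeholders you substitute from the first query's output, and the probe/qos/target values are just examples:

```sql
-- Steps 1 and 2: find the series and the name of the table holding its raw data
SELECT table_id, r_table
FROM S_QOS_DATA
WHERE probe = 'cdm' AND qos = 'QOS_CPU_USAGE' AND target = 'total';

-- Step 3: pull the raw samples from that table, filtered by table_id
-- (replace RN_QOS_DATA_0003 and 1234 with the values returned above)
SELECT sampletime, samplevalue
FROM RN_QOS_DATA_0003
WHERE table_id = 1234
ORDER BY sampletime;
```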

 

We have a portlet called "SLM" which can track performance data down in a similar way.

SLM Data Management - CA Unified Infrastructure Management - 8.4 - CA Technologies Documentation 

SLM Interface Reference - CA Unified Infrastructure Management - 8.4 - CA Technologies Documentation 

View and Export Quality of Service (QoS) Data - CA Unified Infrastructure Management - 8.4 - CA Technologies Documentati… 

 

Please let me know if you have questions.

Regards,

Yu Ishitani

01-27-2017 12:24 PM

Hi Bryan,

 

I'm new to the CA UMP tool. Could you please assist me with how to get historical raw data (CPU, memory and disk utilization) in the UMP tool? Is there any SQL query which can provide that information? If yes, would it be possible to share that query with me? Thanks in advance.

 

Cheers

Alagiri

01-26-2017 12:20 PM

Hi Bryan,

 

I need to get a list of all software installed on a server. Is there any way we can list that?

 

Regards,

01-24-2017 01:49 PM

Large environments are a problem in that first release; I need to add multi-threading so it doesn't require such a long timeout value. I also need to figure out the dynamic sizing of the map; currently you'll probably have to edit the X and Y values to make the topology fit if you have multiple hundreds of robots on a single hub.

01-24-2017 01:47 PM

If you use the thresholding feature instead of the probe configuration the tables are already formatted by section->key->value. You can configure the probes to collect thresholds in the raw configure->profiles section.

 

 

This allows you to filter out the things you don't need. Once you have the profiles configured you can run the threshold_get_configs callback. Once this is done you'll see there are three sample reports that can be accessed (threshold_cdm_report, threshold_ntservices_report, threshold_processes_report).

 

Or you can query the threshold_current table directly, like you are doing with the other table.

01-24-2017 01:20 PM

Ok, thanks for your help. I can't wait to see the next release. I have noticed that 300 seconds is not long enough for the UIM Topology and it will time out, so I will only get about half of the hubs mapped. I am just a little nervous to let it run longer than 300 seconds. We do have a large environment, though.

01-24-2017 01:14 PM

Hello Bryan,

I am building an automation tool which exports all monitoring configurations from all probes into a nicely formatted set of DB tables. I see that a PCA_CFG table is built to store probe configuration files. When I run a SELECT on cfg_file for many of the probes (e.g. CDM), it returns no results for many of the hosts despite the CDM probe being on them. In particular, PCA_CFG appears to show CDM config files for Windows hosts but not for Linux. I also noticed that I could not get any results for many other probes. Is there a trick to retrieving them? Thanks

01-24-2017 10:16 AM

The top probes report is just the number of deployments, not the actual usage for remote probes. You can run the licensing_run_all callback and it should generate a report of your actual probe pack usage. SNMPC and cisco_monitor fall under the 'Network Advanced' pack.

01-24-2017 08:07 AM

Bryan,

Is it possible to get an idea of the number of network devices the cisco_monitor probe is on with this, or possibly how many unique devices the snmp_collector is being used with? The tool has given me great insight; I am just wondering if I am missing something, as I don't see the cisco_monitor probe on my Top Probes list and I don't see any info on the snmp_collector.

01-23-2017 02:46 PM

Ah ok. That would explain it not being on my list then. 

01-23-2017 02:43 PM

If I remember correctly I ignored specific probes in the query being used, to avoid that.

01-23-2017 02:41 PM

So wouldn't (or shouldn't) the hdb probe be one of my "Top Probes", since it's on just about everything, if it's being included?

01-23-2017 02:28 PM

These are system probes that usually don't have much or any configuration and they are only ignored for the configuration archive portion, not the top probes, etc.

01-23-2017 02:23 PM

Bryan,

I also noticed in the Raw Config that the following probes are being ignored:

   " spooler,hdb,snmptd,qos_engine,qos_processor,mpse,ppm,alarm_enrichment,remote_control,pollagent,nsdgtw"

 

Is this because of probe compatibility or usability at this point, or can we remove some of these so that data is included when we run the top probes and/or the probe pack detail requests?

01-23-2017 01:47 PM

Bryan,

Thanks for the information. I'm going to give this a try and see how this goes. Thanks again for your help and great work on this probe!

01-23-2017 01:30 PM

Negative, the option you want to change is in the Probe Utility window. There should be a sprocket or similar icon that opens the options for the Probe Utility window.

01-23-2017 01:27 PM

Ok, and just to be clear, the location I am speaking of is in IM, under Options > Tools > "Probe Request Timeout Value: 10 Sec". I'm going to set this to 300 sec. Does this make the change for all probes across the environment?

01-23-2017 01:23 PM

I usually set it to 300 seconds, just to make sure I give everything enough time.

01-23-2017 01:16 PM

It looks like the Default is set to 10 seconds does this seem correct? What would you suggest I move it to?

01-23-2017 12:56 PM

If you are using IM or AC2 you can try and increase the probe utility request timeouts and see if that helps. 

01-23-2017 12:32 PM

Bryan, this probe is turning out to be very helpful. I am having a bit of an issue with an error I get when I run a few of the commands. For example, when I run licensing_run_all I get the following error:

 

"ServerErrorException - Internal Server Error: (2) communication error, I/O error on nim session................." "Read Timed Out"

 

Have you or anyone else experienced this before? It seems that after a specific amount of time the session between the probe and the MSSQL DB gets killed. I am getting results, though, so I am wondering whether I am only getting whatever could be read in the time the connection stayed open, rather than all the results.

12-30-2016 06:11 PM

Hi Bryan, 

I ran another test on another server and captured the packets. The only difference I saw was in the encoding, and I didn't know how to calculate the hash of the password to validate the data.

 

Packets sent/received by probe of things:

Packets sent/received by probe mysql:

*password hash removed for sanity reasons

 

As you can see, the probe makes the request to the server correctly; the only doubt I have is about the password, because I don't know how to validate the hash. I would also like to ask whether you have tested a password with special characters in your environment.

Thank you for your attention, and Happy New Year.

12-30-2016 10:33 AM

So I just tested the MySQL stuff again; I have to believe it comes down to the user settings for MySQL. Maybe the probe is connecting just a little differently than the data_engine? I have attached a screenshot of the user configurations for MySQL that I have tested. As you can see, I have the following host allowances for the 'root' user:

  • localhost
  • 127.0.0.1
  • ::1
  • %

 

Here is the output of my probe_of_things debug log.

 

DatabaseConnectionInfo [databaseName=auto_10_238_33_112, databaseServer=10.238.33.250, dbProvider=MySQL, nisJdbcUsername=root, nisJdbcPassword={not_shown_for_security_reasons}, nisJdbcUrl=jdbc:mysql://address=(protocol=tcp)(host=10.238.33.250)(port=3306)/auto_10_238_33_112?allowMultiQueries=true]

 

12-29-2016 10:26 AM

Ok, I will try to slice off some time today or tomorrow and look at the MySQL code. The few environments I've tested on have never run into this issue, and it's strange because I pull the configuration directly from the data_engine.

12-29-2016 08:38 AM

Hi Bryan,

I'm having the same issue in my environment, and I get the following log:

 

Starting USM Group retrieval process
Dec 29 11:26:57:003 [attach_socket, probe_of_things] Request to probe "/***/HUB01/robot01/data_engine" callback get_connection_string was successful.
Dec 29 11:26:57:003 [attach_socket, probe_of_things] Resulting parsed connection data is: DatabaseConnectionInfo [databaseName=ca_uim, databaseServer=**.**.**.**, dbProvider=MySQL, nisJdbcUsername=root, nisJdbcPassword={not_shown_for_security_reasons}, nisJdbcUrl=jdbc:mysql://address=(protocol=tcp)(host=**.**.**.**)(port=3306)/ca_uim?allowMultiQueries=true]
Dec 29 11:26:57:005 [attach_socket, probe_of_things] Database is: MySQL
Dec 29 11:26:57:005 [attach_socket, probe_of_things] Integrated Security: False
Dec 29 11:26:57:007 [attach_socket, probe_of_things] User exception in callback for public void com.ca.uim.field.ProbeMain.automationGetUSMGroups(com.nimsoft.nimbus.NimSession) throws com.nimsoft.nimbus.NimException,java.lang.InstantiationException,java.lang.IllegalAccessException,java.lang.ClassNotFoundException,java.sql.SQLException: java.sql.SQLInvalidAuthorizationSpecException: Could not connect: Access denied for user 'root'@'**.**.**.**' (using password: YES)
Dec 29 11:26:57:008 [attach_socket, probe_of_things] Exception in ThreadClient: java.sql.SQLInvalidAuthorizationSpecException: Could not connect: Access denied for user 'root'@'**.**.**.**' (using password: YES)
Dec 29 11:26:57:008 [attach_socket, probe_of_things] java.sql.SQLInvalidAuthorizationSpecException: Could not connect: Access denied for user 'root'@'**.**.**.**' (using password: YES)
Dec 29 11:26:57:009 [attach_socket, probe_of_things] Successful callbacks: []

 

And the probe_of_things was deployed on the primary hub, which already has full access to the database.

12-27-2016 08:24 PM

Dear Bryan,

 

How can we set up the timeout? I am getting a communication error in the status bar after some time when I run the threshold_cdm_report. I think this is because of some communication issue between the hub and the robot, but I'm not sure.

 

Please suggest what could be the reason for this?

 

Regards,

IK

12-27-2016 07:07 PM

So this probe supports two different types of configuration pull and storage. The first is thresholds, in which it stores specific probe configuration values in the format probe->section->key->value. This format is used so users can determine which metrics and thresholds they are collecting, either with the provided reports or with SQL queries into the H2 database. The second is the entire probe configuration archive, which is also available in the configuration_archive probe. This feature will create a diff entry in the configuration_changes table each time it detects a configuration different from the previously stored one. I just checked the code, and it looks like there is currently no filter on this feature, so it cannot filter by hub, robot or probe at this time; I will definitely add that to the backlog. The configurations are stored in the configuration_current table and the diffs are stored in the configuration_changes table. There are no reports as of now; that's also in the backlog.
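As an illustration, once threshold collection has run you could browse the collected values directly in the H2 database. The table name here comes from the description above, but the probe filter column is my assumption; check the actual H2 schema before relying on it:

```sql
-- Hypothetical sketch: browse collected CDM thresholds directly in H2.
-- Table name from the description above; the probe column is assumed,
-- so verify against the actual schema first.
SELECT *
FROM threshold_current
WHERE probe = 'cdm';
```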

 

If you have further questions or just want to discuss/webex just shoot me an email.

12-27-2016 06:55 PM

Hi Bryan,

In reading this and the included configuration_archive probe doc, I am a bit confused. Does this probe pull and store complete probe configurations and do the "diffs"? Does the probe exclude list work? Do you have any example diff reports?

12-27-2016 06:05 PM

Dear Bryan,

 

You are too quick to answer the queries. I really appreciate.

 

Maybe I need to change my browser and check.

 

Thanks for quick guidance.

 

Regards,

IK

12-27-2016 05:53 PM

Each table in each report should have export options in the table header, look at the example screenshot for the threshold reports.

 

Bryan Morrow

12-27-2016 05:49 PM

Dear Bryan,

 

After changing the log level, I am able to see the results in the HTML file. Could you please suggest how I can get the required report in CSV or Excel?

 

Regards,

IK

12-27-2016 05:19 PM

If you set the loglevel to 4 using raw_configure, do you see any errors or success entries in the probe log while running the threshold_get_configs callback?

12-27-2016 05:13 PM

Dear Bryan,

 

I did the same in my test lab where I have 8 servers under monitoring; however, it is not showing any values in the cdm_thresholds.html file. What could be the reason? Any idea?

 

Thanks & Regards,

IK

12-27-2016 11:10 AM

Step 1) Verify the CDM thresholds are configured to be collected by opening the probe configuration in raw configure mode, and look for the section named profiles. Under profiles there should be a cdm section.

Step 2) Open the Probe Utility from Infrastructure Manager (ctrl-p) or Admin Console (probe drop-down).

Step 3) Run the threshold_get_configs callback (you can provide a CSV list of hub names, not addresses, if you'd like to filter which hubs to pull thresholds from).

Step 4) Once threshold_get_configs has completed (watch the logs), you will run threshold_cdm_report

Step 5) Navigate to the probe installation folder and then the /reports directory, look for the cdm_thresholds.html

12-27-2016 10:35 AM

Looks Great.

 

How can I generate the disk threshold report?

 

Regards,

IK

12-14-2016 09:49 PM

It is solvable, just not as easy as running gethubs. I have solved this in the past, just haven't had a chance to update it and add it to this probe. 

12-14-2016 04:47 PM

It would be useful to expand the vmware topology mapping capability to be able to map the hub relationships, for such things as tunnels, distsrv\ADE forwarding, queues, etc

 

Each hub can already calculate its relationship to all the other hubs it knows about using the gethubs callback, so translating this into a map should be solvable.

 

Perhaps also being able to provide a robot or a queue name, and showing the bus path to that object from the primary hub, data_engine, attach queue, etc.?

 

e.g. Data path for \Domain\hub4\robotZ

   SLM --> data_engine --> primaryhub --> tunnel --> hub2 --> tunnel --> hub3 --> tunnel --> hub4 --> robotZ

e.g. Queue path for \Domain\hub2\discovery_agent

   discovery_server --> primaryhub --> tunnel --> hub1 --> tunnel --> hub2 --> discovery_agent
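The path lookup suggested above reduces to a shortest-path search once each hub's gethubs output has been collected into an adjacency map. A sketch of that idea (the topology below is illustrative, not real gethubs output):

```python
from collections import deque

def bus_path(links, start, goal):
    """Breadth-first search for the shortest hop path between two bus objects."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in links.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no route known

# Illustrative adjacency built from per-hub gethubs results plus robot lists
links = {
    "primaryhub": ["hub2"],
    "hub2": ["primaryhub", "hub3"],
    "hub3": ["hub2", "hub4"],
    "hub4": ["hub3", "robotZ"],
}
print(" --> ".join(bus_path(links, "primaryhub", "robotZ")))
```

Tunnel vs. direct links could be tracked as edge labels on the same structure to render the "tunnel -->" hops in the output.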

12-07-2016 09:38 AM

Version 1.13 updated. Includes two new features.

  1. Integration of the configuration_archive probe. This is just a consolidation of features to better centralize my development efforts. 
  2. HTML 5 report for top 15 monitoring probes, optional callback for origin filtering.
    1. Due to my limited availability recently, I have only added this report for MSSQL.

 

In the next version I will work on making sure the MYSQL code is added for all the queries, along with various bug fixes.

 

Thanks,

 

Bryan

12-05-2016 05:49 PM

Hello,

I believe we do, but I'm not a MySQL expert. I ran: select host, user, password from mysql.user

 

12-05-2016 05:01 PM

Have you checked the MySQL user settings to see if 'root'@'opsmonp03.ucop.edu' has access? I've had another user point this out, but every time I have tested MySQL I haven't had any issues. That said, I am testing in small, controlled lab environments.

12-05-2016 04:53 PM

I tried to run the USM group list, but got a "Communication error". If this is the same login as data_engine, could the error be network based, like iptables preventing that communication from server x to server y?

 

 

Dec 05 13:48:57:629 [attach_socket, probe_of_things] User exception in callback for public void com.ca.uim.field.ProbeMain.automationGetUSMGroups(com.nimsoft.nimbus.NimSession) throws com.nimsoft.nimbus.NimException,java.lang.InstantiationException,java.lang.IllegalAccessException,java.lang.ClassNotFoundException,java.sql.SQLException: java.sql.SQLInvalidAuthorizationSpecException: Could not connect: Access denied for user 'root'@'opsmonp03.ucop.edu' (using password: YES)
Dec 05 13:48:57:631 [attach_socket, probe_of_things] Exception in ThreadClient: java.sql.SQLInvalidAuthorizationSpecException: Could not connect: Access denied for user 'root'@'opsmonp03.ucop.edu' (using password: YES)
Dec 05 13:48:57:631 [attach_socket, probe_of_things] java.sql.SQLInvalidAuthorizationSpecException: Could not connect: Access denied for user 'root'@'opsmonp03.ucop.edu' (using password: YES)
    at org.mariadb.jdbc.internal.SQLExceptionMapper.get(SQLExceptionMapper.java:134)
    at org.mariadb.jdbc.internal.SQLExceptionMapper.throwException(SQLExceptionMapper.java:106)
    at org.mariadb.jdbc.Driver.connect(Driver.java:100)
    at java.sql.DriverManager.getConnection(Unknown Source)

11-23-2016 03:28 PM

Thought it might be - thanks

 

 

James Christensen|Sr. Services Architect

 


11-23-2016 03:25 PM

It's the configuration_archive.pdf now, forgot to update the post.

 

Sent from my iPhone

11-23-2016 03:22 PM

Hi Bryan,

I don't see a threshold_configurations.pdf attached here.

11-21-2016 11:24 PM

Forgot to mention, I also added a probe pack detail report to the licensing callback. This should list the hub->robot->source/ip of the device using the license.

11-21-2016 01:13 PM

Thanks Bryan

11-21-2016 12:49 PM

That is a pretty significant amount; that would probably be better off run by a DBA or someone during a maintenance window.

 

Devices without Data: select q.source, q.target, q.robot, q.origin, q.probe, q.qos, s.sampletime from s_qos_snapshot s inner join s_Qos_data q on s.table_id=q.table_id GROUP BY s.table_id, q.source, q.target, q.robot, q.origin, q.probe, q.qos, s.sampletime having datediff(day, max(s.sampletime), getdate()) > DAYS

 

Delete qos by target: DELETE FROM S_QOS_DATA WHERE target like 'target'

 - This one needs to be updated to delete the raw data and then the S_QOS_DATA
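For a very large target list, deleting in small committed batches keeps transactions short and reduces locking. A sketch of that pattern (SQLite is used here purely so the example runs anywhere; the probe itself targets MSSQL/MySQL, and the same batching would apply to the raw tables):

```python
import sqlite3

def delete_targets_in_batches(conn, targets, batch_size=10000):
    """Delete S_QOS_DATA rows for a target list, committing per batch."""
    cur = conn.cursor()
    deleted = 0
    for i in range(0, len(targets), batch_size):
        batch = targets[i:i + batch_size]
        placeholders = ",".join("?" * len(batch))
        cur.execute(f"DELETE FROM S_QOS_DATA WHERE target IN ({placeholders})", batch)
        deleted += cur.rowcount
        conn.commit()  # short transactions limit lock contention
    return deleted

# Minimal demonstration with a throwaway in-memory table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE S_QOS_DATA (table_id INTEGER, target TEXT)")
conn.executemany("INSERT INTO S_QOS_DATA VALUES (?, ?)",
                 [(i, f"dev{i}") for i in range(25)])
n = delete_targets_in_batches(conn, [f"dev{i}" for i in range(20)], batch_size=8)
print(n)
```

With the 500,000-row scenario mentioned in this thread, a batch size around 10,000 as proposed would mean roughly 50 short transactions instead of one very long one.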

11-21-2016 12:41 PM

Bryan

 

Could you provide the queries used by the commands below?

 

healthcheck_get_devices_no_data?

Delete QOS by target list

 

I have about 500,000 metrics with no data for more than 24 months to delete. If I plan it in batches of 10,000, it will not hurt performance as much.

11-21-2016 12:09 PM

You can use the target column from that report, yes. However, if there is a large number of targets you may want to break it up. I'm not sure how large a string the probe_utility can accept.

11-21-2016 12:01 PM

Hi Bryan

QOS by target list: can I use the CSV generated by healthcheck_get_devices_no_data?

 

If not, could you give an example of a correct use?

Thanks for the new version

11-21-2016 11:40 AM

Version 1.12 uploaded. Added the ability to delete QOS by target list, an Inactive Probe report, the option to filter threshold gathering by a CSV list of hub names, and filtering of probe threshold reports by USM groups.

11-17-2016 11:54 AM

Adding that feature now, should be in the next release early next week.

11-17-2016 11:34 AM

Hmm, OK.
In my case, I do not need to completely remove the robot.

Would it be possible in the future for the probe to remove by target through a CSV list?

Thank you

11-17-2016 10:46 AM

Thanks Bryan.

11-17-2016 10:05 AM

You just need to provide the device/robot name in this field. So from your example if you wanted to remove the ENTIRE bpo01250 robot and its data you would enter the following:

 

device_list: bpo01250

delete_qos: yes

 

This would remove the bpo01250 robot from discovery and all of its relevant QOS data. 

 

If you just want to remove the QOS, you will need to do that from the SLM or direct SQL queries.

11-17-2016 10:02 AM

Yes, the HDB and spooler probe security issues are well known. I have the foundation laid for searching the hubs and looking for bad probes, I just haven't implemented it yet.

 

Thanks for the suggestion.

11-17-2016 10:01 AM

I think it's noted somewhere in the documentation that some of the queries are just placeholders until I determine the billing method and how to collect that information. I will look into the SAP billing in the next revision.

11-17-2016 09:13 AM

Hi Bryan

 

Sorry, I did not understand. Could you give me an example based on the return of my list below?

 

source,"target","robot","origin","probe","qos","sampletime"

bpo01250,"bpo0135\361.877940.bpoDMZ59","bpo01250","BPO-Hub01","sqlserver","QOS_SQLSERVER_LONG_QUERIES","28-Aug-2015 13:04:15"

 

bpo01250,"bpo0135\1379.877316.bpoDMZ52","bpo01250","BPO-Hub01","sqlserver","QOS_SQLSERVER_LONG_QUERIES","28-Aug-2015 13:04:15"

 

bpo01250,"bpo0135\1598.877316.bpoDMZ52","bpo01250","BPO-Hub01","sqlserver","QOS_SQLSERVER_LONG_QUERIES","28-Aug-2015 13:04:15"

11-17-2016 07:46 AM

Hi Bryan

 

 

I would suggest a report:
Robot status: unfortunately, sometimes the hdb and spooler probes stop working and only the controller probe remains UP.

Maybe also include probes with a red status.

 

In environments with many robots, a report would help identify robots that need attention.

Thank you, and congratulations on the great job.

11-17-2016 06:23 AM

Hi Bryan,

 

I think maybe the license pack needs to be updated; a little suggestion:

 

I found a wrong probe name in the queries you are using:

select count(distinct robot_id) as count from CM_NIMBUS_PROBE
WHERE probe_name in ('sapbasis_agentil', 'sapbasis')
AND last_action >= DATEADD(DAY, -30, GETDATE())

 

sapbasis = sap_basis

 

And I don't know if we can count only how many probes are deployed; for probes like azure or aws, I remember we need to count all destination devices, not only the robots running the probes. Is this true?

 

Thanks for your excellent work.

 

Best

11-16-2016 02:57 PM

Not in the current version. I will look at adding this in the next release.

11-16-2016 02:53 PM

Hi Bryan, this is a very good probe.

 

Is it possible to export details for License Report, like server and device names and ip.?

 

Thanks, best

11-10-2016 10:32 AM

First, in regards to the MySQL connection: I am pulling the information directly from the data_engine, so the username and password should be correct. I will have to look at the code a little more in depth to see what might be happening. If you change the loglevel to 4, what does the JDBC connection string show?

 

Second, for the H2 tables you'd need to run the "licensing_run_all" callback, this populates the other two tables. The licensing callback should generate the index.html page in the reports as well.

 

If you want to email me directly to work on this, I can be reached at bryan.morrow@ca.com

 

Thanks,

 

Bryan

11-10-2016 10:25 AM

Hi Bryan,

  I have a Linux installation connecting to a remote Linux MySQL machine. I installed probe_of_things on the primary hub to try it out. A number of the callbacks fail with access denied; there seems to be an assumption within the JDBC connection string back to the MySQL database that the DB login password is the same as the login for the remote database machine itself. This is not the case in my setup, with the MySQL root password being different from the root password for the DB box. Is there a way of configuring the connection string, or could you let me know where the probe builds the connection string from?

Also, there seem to be three main tables in the H2 DB: UIM_PROBE_PACK_SUMMARY, THRESHOLD_CURRENT and UIM_ROBOT_SUMMARY. Currently, I can only get data into THRESHOLD_CURRENT. This may be because of the failure of the callbacks related to the above. Could you let me know which callbacks should populate these tables so I know whether they're working OK?

The connection string looks correct, but the error I'm seeing is:

Nov 10 20:23:13:384 [attach_socket, probe_of_things] Exception in ThreadClient: java.sql.SQLInvalidAuthorizationSpecException: Could not connect: Access denied for user 'root'@'<servername>' (using password: YES)

 

 

Jon

11-08-2016 04:08 PM

For now you should be able to put a CSV list of hub names (not full addresses) in the .CFG file, using the hublist key. In the next revision I will add an option to put that list into the callback itself, with the default being all hubs.

 

As for the USM group, I'll look into that as well.

 

Thanks for the feedback.

 

Bryan

11-08-2016 04:00 PM

Hello Bryan,

 

Thanks for all this neat work! I was curious whether there will be (or already is?) a way to limit the scope of a threshold report (for CDM, for example). Can I specify the robots/hubs I want to pull from instead of my entire environment?

 

In theory, if you could input a scope, could you somehow retrieve the USM group list from the cm_members and cm_dynamic tables to run reports on specific groups? Just thinking out loud here, thanks again.

 

Alberto

11-08-2016 02:06 PM

Status (4) is a "not found", so either the robot doesn't exist or the route is bad? Maybe it's connected to a different hub now?

 

Bryan Morrow

Sr Principal Consultant, Technical Sales


11-08-2016 02:02 PM

How about this one?
Nov 08 12:13:36:700 [pool-1-thread-3, probe_of_things] Failed to retrieve the list of probes from robot Robot_Name. We will skip discovering this robot for now. Cause: Received status (4) on response (for sendRcv) for cmd = 'nametoip' name = '/Domain_Name/Hub_Name/Robot_Name/controller' 

11-08-2016 12:18 PM

1) Unfortunately, I will not be attending CA World this year.

2) As for the communication issue, this is something I'll have to look into. I'm not really sure how I could pull this off without an 'agent' on the remote hubs.

11-08-2016 12:04 PM

BryanKMorrow - Great probe! Thank you for all your efforts. This consolidates several of the lua scripts we had floating around. 

 

One question though - Any plans or ability to leverage the communication path from the hubs that the robots are connected to? 

 

We're getting a lot of errors like this one: 
Nov 08 12:01:33:746 [pool-1-thread-2, probe_of_things] Failed to retrieve the list of probes from robot ServerName. We will skip discovering this robot for now. Cause: Unable to open a client session for ***.***.***.***:48000

 

Unfortunately, we do not have a single hub/robot that can communicate to everything in our network. We have a lot of different subnets that require a hub to reach the robots within them.

 

Thanks again,

Jason Eckelstafer

 

PS - going to CA World by any chance?

11-08-2016 11:02 AM

New version 1.11.

 

Includes two new threshold reports: processes and ntservices. Also includes the ability to provide a robot and probe name and retrieve the log files and configuration file which gets compressed into a ZIP file. This should help with retrieving remote probe logs for support cases.

 

Thanks,

 

Bryan

10-27-2016 10:49 AM

Thanks for the cdm_thresholds report, I'm very interested to see other probes added (like url_response, processes,..)

10-27-2016 09:25 AM

I had the same problem. The ALIVE_TIME column in CM_NIMBUS_ROBOT does not store the alive time of a robot, despite its name. In newer versions of UIM, the frequency at which that column gets refreshed has dramatically decreased.

I have been investigating, and the best way to retrieve the last alive time of a robot is by executing the hub callback "getrobots"; the output has a "lastupdated" field with the robot's last alive time. The "lastupdated" field is a Unix timestamp in UTC+0, so you will have to convert it to your local time.
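That conversion is a one-liner in most languages; a quick Python sketch (the sample timestamp value is made up for illustration):

```python
from datetime import datetime, timezone

def lastupdated_to_datetime(lastupdated, tz=None):
    """Convert a getrobots 'lastupdated' value (Unix seconds, UTC+0) to a datetime.

    With tz=None, fromtimestamp() yields local time; pass timezone.utc to keep UTC.
    """
    return datetime.fromtimestamp(int(lastupdated), tz=tz)

# Example: a lastupdated value from late October 2016
print(lastupdated_to_datetime("1477500000", tz=timezone.utc))
```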

10-26-2016 05:58 PM

Ok, new version uploaded to the post. The robot calculations are now done by iterating through each hub and running the getrobots callback. The new field, QOS for the last 30 days, is done via SQL.

10-26-2016 04:46 PM

Seems my interpretation of the alive_time field in the CM_NIMBUS_ROBOT table was incorrect. I will post an updated version with the appropriate queries shortly.

10-26-2016 04:30 PM

To get the CDM threshold report, you'll need to run the other threshold_get_configs callback first. This callback looks at the <profiles> section of the probe_of_things configuration and stores the configured probes->sections in the embedded H2 database. This will loop through all of your hubs->robots and look for those specific probes. The best place to install the probe is on a robot that has direct TCP access to the UIM database so it can pull the UMP and licensing reports.

10-26-2016 04:26 PM

Hi Bryan, maybe I'm missing this, but where do we deploy this probe? I just threw it onto my laptop, opened Probe Utility, and tried running the CDM report option. It said successful, but when I view the HTML report, it's empty. Does this have to be running on a hub, or the primary hub?

I don't see a Setup section in the PDF.

The pics look great and this would be very helpful in our environment.

Thank you,

Dan

10-26-2016 03:26 PM

I should also comment as this is more to help with license capacity than troubleshooting offline robots. The goal is to provide an accurate count of the robot counts for the last 30 days. So the report shows the following:

 

Total Robots Online Last 30 Days - Total Robots Online Currently - Total Robots Offline (last 30 days vs. online now)

 

So your monthly capacity max for robots is the first column, the second is your current capacity, and the third is the difference.
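The arithmetic behind those three columns can be sketched as follows (the robot timestamps and the 15-minute "online now" heartbeat window below are illustrative assumptions, not the probe's actual logic):

```python
from datetime import datetime, timedelta

def robot_capacity(last_seen, now=None, window_days=30):
    """Return (online_last_30_days, online_now, offline) from {robot: last_seen}."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=window_days)
    monthly = sum(1 for ts in last_seen.values() if ts >= cutoff)
    # "Online now" assumed to mean seen within the last 15 minutes
    current = sum(1 for ts in last_seen.values() if now - ts < timedelta(minutes=15))
    return monthly, current, monthly - current

now = datetime(2016, 10, 26, 12, 0)
robots = {
    "r1": now - timedelta(minutes=1),  # online now
    "r2": now - timedelta(days=3),     # seen this month, offline now
    "r3": now - timedelta(days=90),    # stale, outside the 30-day window
}
print(robot_capacity(robots, now=now))
```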

10-26-2016 03:17 PM

I would run the following query to look for machines that haven't checked in the last 30 days: 

MSSQL - select COUNT(*) as count from CM_NIMBUS_ROBOT where alive_time <= GETDATE()-30

MYSQL - select COUNT(*) as count from CM_NIMBUS_ROBOT where alive_time <= (CURDATE() - INTERVAL 1 MONTH )

 

From there I would verify the robots don't actually exist, as it's possible for the database check-ins to not work properly sometimes. If the robots don't exist, run the same query from above but as a delete.

MSSQL - delete from CM_NIMBUS_ROBOT where alive_time <= GETDATE()-30

MYSQL - delete from CM_NIMBUS_ROBOT where alive_time <= (CURDATE() - INTERVAL 1 MONTH )

10-26-2016 03:12 PM

So we need to remove the old robots from the DB? If so, how do we remove them?

10-26-2016 02:57 PM

It's possible you have a bunch of old robots in the CM_NIMBUS_ROBOT table; the current offline robot calculation takes the total number of active robots in that table (which could include tons of non-existent robots) and subtracts the current number of robots online from it.

 

Take a look at the CM_NIMBUS_ROBOT table and see how many robots it returns.

 

select COUNT(*) as count from CM_NIMBUS_ROBOT where robot_active=1

 

This is just a first draft on the licensing, so I'll be improving the calculations on the next version.

10-26-2016 02:50 PM

Bryan,

 

I am getting 392 inactive robots in the setup, but I can see only 20 alerts in UMP.

 

Regards,

10-26-2016 11:28 AM

Major update posted, version 1.10. New features include:

  • niscache clean callback (includes robot restart)
  • probe configuration modification from JSON
  • UMP/USM health report
  • Collect and store threshold related information
    • Please use attached threshold_configurations.pdf to configure additional probes for collection
  • Generate CDM threshold report
  • Create VMware topology report

 

 

Screenshot samples included for vmware topology, cdm threshold and UMP/USM health reports.

 

The UMP/USM health report should work fully on MSSQL and partially on MYSQL, no Oracle support yet.

 

PLEASE let me know if you run into any issues.

 

Thanks!

 

Bryan

10-05-2016 10:50 AM

New update just posted. New license pack HTML 5 report and the ability to modify probe configurations from a central probe.

09-27-2016 03:25 PM

Thanks Bryan.

09-23-2016 11:00 AM

Just paste the comma-separated list into the field, not an actual file path. So instead of C:\temp\file.txt, paste:

 

device1,device2,device3

 

Bryan

09-23-2016 07:55 AM

How should I send the CSV file? The full path?

Ex: C:\temp\list.cs

09-22-2016 05:27 PM

You would paste a CSV list into the callback field in the probe utility. If it is a single device, the utility will delete that single device. If it is a CSV list, it will loop through each one. 
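That single-device vs. CSV-list handling amounts to roughly the following (a sketch; wipe_device is a hypothetical stand-in for the probe's actual discovery/QOS delete logic):

```python
def wipe_devices(device_list, wipe_device):
    """Split a pasted CSV string and apply the wipe callback to each device name.

    A single device name with no commas simply yields a one-element list.
    """
    devices = [d.strip() for d in device_list.split(",") if d.strip()]
    for device in devices:
        wipe_device(device)
    return devices

removed = []
devices = wipe_devices("device1, device2,device3", removed.append)
print(devices)
```

Stripping whitespace around each name makes pasted lists like "device1, device2" behave the same as "device1,device2".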

09-22-2016 05:17 PM

Hi Bryan,

Can you explain how to use the parameter "automation_device_wiper" to a CSV file?

 

Thanks,

09-09-2016 03:39 PM

Uploaded 1.04. This version includes two more HTML reports for UIM Users and Account Contacts. The automation_device_wiper callback now supports a CSV list of devices to remove. 

07-28-2016 10:52 AM

Niiice!!!

07-28-2016 10:21 AM

1.03 just uploaded, added the automation_device_wiper callback. This allows you to remove a device from discovery by providing the device name with an option to also delete the raw QOS data.

07-26-2016 01:25 AM

That was fast!

Thank you!

07-25-2016 12:41 PM

Done. Version 1.02 is now attached and always creates a CSV.

07-25-2016 07:50 AM

Hi

I appreciate the automation_devices_no_data function and the upcoming device deletion in USM.

I would prefer to always get a .csv file as result. Maybe you can add it as option?

Leandro

07-22-2016 02:41 PM

Version 1.01 uploaded and documentation modified.

 

You can now create a list of devices with no data within a specified number of days.

07-22-2016 12:09 PM

Sounds good! Looking forward to "Return devices without data by providing an age in days."

 

Cheers,

 

A

07-22-2016 12:03 PM

Sounds interesting. I think you meant "This probe will be under constant development with new utilities being added regularly."