So I do know what probe_discovery messages are. They're compressed JSON data describing the devices being monitored by the probe. The discovery server processes them and updates the CM_ tables (and possibly others) accordingly.
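You can poke at one of these payloads yourself. A minimal sketch, assuming the body is zlib-compressed UTF-8 JSON (the exact compression scheme is my assumption; verify against your hub/probe versions) and that you've already pulled the raw bytes off the bus:

```python
import json
import zlib

def decode_probe_discovery(raw: bytes) -> dict:
    """Decompress and parse a probe_discovery payload.

    Assumes zlib-compressed UTF-8 JSON -- an assumption, not a
    documented format; check before relying on it.
    """
    return json.loads(zlib.decompress(raw).decode("utf-8"))
```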
Acronym soup.
TNT = The Next Thing, a.k.a. NIS
TNT2 = The Next Thing 2, a.k.a. NIS2
TNT3 = NIS3, and so on...
I'm fuzzy on where TNT ends and TNT2 begins, but up through TNT2 it's all using niscache on the robot. Discovery_server periodically polls, collects, and cleans up some of that cache while publishing details to the CM tables.
The data is a combination of object-style thingies (see the sketch after this list):
device = a device, presumably with an IP address.
dev_id = encoded device id; the unique identifier for a device.
ci = a configuration item; typically a component on a device, like eth0.
ci_id = encoded ci id; the unique identifier for a ci.
ci_type = equivalent to the subsysid assigned in the nas; something like System.Disk (1.1) or whatever.
Metric_Item = the metrics available under a ci_type: octets in, octets out, etc.
MI_id = the numeric value of a Metric_Item.
ci_type_id = the numeric value of a ci_type, a.k.a. the subsysid.
metric_type_id = the combination ci_type_id:MI_id, e.g. 1.1:39 = System.Disk:Read In.
metric_id = an encoded, measurable instance of a metric_type_id for a specific ci on a specific device, e.g. Network.Interface OctetsIn on eth0.
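To make the hierarchy concrete, here's a minimal sketch of how those pieces nest. The class and field names are my own illustration, not anything from the SDK or the schema:

```python
from dataclasses import dataclass, field

# Illustrative model only -- names are mine, not from the Nimsoft SDK.

@dataclass
class MetricItem:
    mi_id: int            # numeric Metric_Item id, e.g. 39
    name: str             # e.g. "Read In"

@dataclass
class CiType:
    ci_type_id: str       # numeric ci_type / subsysid, e.g. "1.1"
    name: str             # e.g. "System.Disk"
    metric_items: list[MetricItem] = field(default_factory=list)

    def metric_type_id(self, mi: MetricItem) -> str:
        # metric_type_id = ci_type_id:MI_id, e.g. "1.1:39"
        return f"{self.ci_type_id}:{mi.mi_id}"

@dataclass
class Ci:
    ci_id: str            # encoded unique id for this component
    name: str             # e.g. "eth0"
    ci_type: CiType

@dataclass
class Device:
    dev_id: str           # encoded unique device id
    ip: str
    cis: list[Ci] = field(default_factory=list)
```

A metric_id, then, is the encoded identity of one (device, ci, metric_type_id) triple: the thing you can actually measure.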
This is basically TNT2: discovery_server collects this data and publishes it to the CM tables. The formal units, types, and associations allow for automated reporting, in the proper formats, for the devices and device components being reported on.
The downside is that all of the met_ids, ci_ids, and dev_ids end up as files in a single flat directory on the robot (./niscache), which can become an I/O bottleneck when you are monitoring things like switches.
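If you want to gauge how bad it has gotten on a given robot, a quick count of the entries will tell you. The path below is an assumption; point it at your robot's actual niscache directory:

```python
import os

# Count files in the robot's flat niscache directory to gauge I/O
# pressure. The path is an assumption -- adjust for your install.
NISCACHE = "/opt/nimsoft/niscache"

count = 0
with os.scandir(NISCACHE) as entries:
    for entry in entries:
        if entry.is_file():
            count += 1

print(f"{count} niscache files in {NISCACHE}")
```

Tens of thousands of tiny files in one flat directory is where filesystems start to hurt, and a switch-heavy environment gets there quickly.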
TNT3 adds the probe framework, which is the basis for new probes going forward, including snmptoolkit, icmp... and vmware.
It allows the developer of the probe to discover the met_id objects and organize them logically into objects and containers, with some other attributes that allow for automated generation of the configuration "gui" (as they keep calling it) in Admin Console. This device topology is published as a "graph" under the subject probe_discovery. It's compressed JSON that goes to the discovery_server for processing.
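As a rough illustration of the shape of such a message (the real probe_framework graph schema is richer, and every key name below is invented for illustration), building and compressing a toy topology might look like:

```python
import json
import zlib

# Toy topology graph -- every key name here is invented; the real
# probe_framework schema is its own thing.
graph = {
    "device": {
        "dev_id": "D-abc123",
        "ip": "10.0.0.5",
        "containers": [
            {
                "name": "Interfaces",
                "cis": [
                    {"ci_id": "C-eth0", "ci_type": "1.1",
                     "metrics": [{"metric_type_id": "1.1:39"}]},
                ],
            },
        ],
    },
}

payload = zlib.compress(json.dumps(graph).encode("utf-8"))
print(f"compressed graph: {len(payload)} bytes")
```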
ppm fits in there somewhere. Maybe something to do with applying the Canonical Topology Description (CTD) to the topology information in a probe to generate the config gui. I think maybe ppm is a bridge of some sort between cfg, CTD, graph, or old NIS2 probes not built on the probe framework, but who knows?
The gateway to TNT2 is using the ciopen, cialarm, and metric-this-that-the-other functions instead of the much simpler nimalarm... For the extra effort, this buys you automatically configured graphs in USM, associated with your device, from a custom probe, like magic.
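Conceptually, the difference is extra bookkeeping. The function names below are stand-ins, not the real SDK calls (check the SDK docs for actual names and signatures); the point is the shape of the work, not the spelling:

```python
# Hypothetical stand-ins for the real SDK calls -- actual names and
# signatures live in the Nimsoft SDK docs.

def nimalarm(severity: int, message: str, subsys: str) -> None:
    ...  # plain TNT1-style alarm: no CI/metric association

def ci_open(ci_type: str, ci_name: str) -> object:
    ...  # returns a handle bound to a specific component (CI)

def ci_alarm(handle: object, metric_type_id: str,
             severity: int, message: str) -> None:
    ...  # alarm bound to a CI and metric_type_id, which is what
         # lets USM auto-associate graphs with the device

# The old way: one call, no topology context.
nimalarm(3, "disk 90% full", subsys="1.1")

# The TNT2 way: open a CI, then alarm against a metric_type_id.
h = ci_open(ci_type="1.1", ci_name="Disk C:")
ci_alarm(h, metric_type_id="1.1:39", severity=3, message="disk 90% full")
```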
TNT3 is all probe framework. I haven't sifted it all out yet, but the outputs appear to be very much a work in progress. The whole direction is fairly promising.
Side note: if you have a HUGE vCenter, the compressed graph in a probe_discovery message can exceed 1MB. This is significant because of a bug in hubs prior to the 7.x series: a lazy megabyte (1,000,000 bytes, not 1,048,576) was a hard-coded maximum in an internal hub routine that took messages off the spooler in_ queues and pushed them to the hub for processing. The internal spooler would accept the message and say OK to the sender, then bork when trying to hand it to the hub, and because the operation used a hard-coded bulk size of 20, you could lose anywhere between zero and 19 other messages as collateral damage, without any alert.
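If you're stuck on a pre-7.x hub, a defensive size check before publishing is cheap insurance. A minimal sketch, assuming zlib-compressed JSON as above; the send callable is a placeholder for however your probe actually posts to the bus:

```python
import json
import zlib

# Pre-7.x hubs choke on messages over a lazy megabyte (1,000,000
# bytes), silently taking up to 19 neighbors in the same bulk of 20.
HUB_LIMIT = 1_000_000

def publish_graph(graph: dict, send) -> None:
    """Compress a topology graph and refuse to send it if it would
    trip the old hub limit. `send` is a placeholder for however your
    probe actually posts to the probe_discovery subject."""
    payload = zlib.compress(json.dumps(graph).encode("utf-8"))
    if len(payload) >= HUB_LIMIT:
        raise ValueError(
            f"compressed graph is {len(payload)} bytes; "
            f"pre-7.x hubs choke at {HUB_LIMIT}")
    send("probe_discovery", payload)
```

Failing loudly on your side beats losing a random slice of 20 messages with no alert on the hub's side.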