Idea Details

Enhancement Request: Identity Manager Provisioning Service (TCP 20389/20390) Load-Balance

Last activity 20 days ago
Alan Baugher
07-22-2019 12:39 PM

BACKGROUND:

1- The CA Identity Manager IME may have awareness of multiple IM Provisioning Servers, but these are NOT used for load-balancing; they are used for failover only.

2- The IM Provisioning Servers have minimal knowledge of peer servers.  
Information shared between IM Provisioning Servers is limited to the shared Provisioning Directory DSAs.


CHALLENGE:
 As the number of transactions with provisioning events scales, the provisioning tier must scale as well.   Current default configurations may have 1-N J2EE servers communicating with a SINGLE IMPS (IM Provisioning Server).

One method to address this default challenge is the use of pseudo hostnames, where the "primary IMPS server" may be different for every J2EE server by resolving the pseudo hostname to a different IP address in each J2EE server's local OS hosts file (see the /etc/hosts sketch after this list).
- This configuration does NOT offer load-balancing, but it does have the advantage of using multiple IMPS servers during transactions.
- This configuration relies on the J2EE tier's load-balancing so that any top-driven business logic that includes provisioning uses the dedicated IMPS server assigned to its J2EE server.
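
A minimal sketch of the pseudo-hostname approach is shown below. The hostname imps-primary.example.com and the IP addresses are assumptions for illustration only, not values from any shipped configuration.

    # /etc/hosts on J2EE server #1 -- the pseudo hostname resolves to IMPS server A
    10.0.0.11   imps-primary.example.com   imps-primary

    # /etc/hosts on J2EE server #2 -- the same pseudo hostname resolves to IMPS server B
    10.0.0.12   imps-primary.example.com   imps-primary

    # Each J2EE/IME node is then configured to reach the provisioning tier at
    # ldap://imps-primary.example.com:20389 (or TLS on 20390), so provisioning
    # traffic is spread across IMPS servers per J2EE node, with no true
    # load-balancing within any single node.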

REQUEST(S):

- Load-Balancing configuration for the Identity Manager Provisioning Tier/Service (TCP 20389/20390).
- May be delivered as READ-ONLY for a first version.
- This would be expected to increase overall performance in environments where 95% of all observed transactions are queries to the provisioning tier.

- A later release could deliver full (read/write) load-balancing, where a messaging bus or similar mechanism would be used to manage any out-of-order update challenges.


EXAMPLE(S):
- Perhaps a documented configuration that uses the CA Directory "load-share" feature in an intermediate DSA router between the J2EE servers and the IMPS servers (a conceptual sketch follows the link below).
https://community.broadcom.com/communities/community-home/digestviewer/viewthread?MID=794818#bm61044fe5-aa71-423b-9a45-f4bfa7bfff31
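
The knowledge-file fragment below is purely a conceptual sketch of what such a documented configuration might look like, not a verified or supported setup. The DSA names, prefix, hostnames, and ports are assumptions, and the "load-share" dsa-flags value and overall syntax should be checked against the CA Directory documentation for the release in use; whether a DXserver router may front the IMPS slapd services this way is exactly what this idea asks to have documented.

    # router.dxc (hypothetical) -- router DSA the J2EE tier would bind to
    set dsa "imps-router" =
    {
        prefix        = <dc im>
        dsa-name      = <dc im><cn imps-router>
        address       = tcp "router-host.example.com" port 21389
        auth-levels   = anonymous, clear-password
        dsp-idle-time = 100000
    };

    # Two IMPS slapd endpoints sharing the same prefix; the load-share flag is
    # intended to let the router distribute queries between them.
    set dsa "imps-a" =
    {
        prefix        = <dc im>
        dsa-name      = <dc im><cn imps-a>
        address       = tcp "imps-a.example.com" port 20389
        auth-levels   = anonymous, clear-password
        dsa-flags     = load-share
        dsp-idle-time = 100000
    };

    set dsa "imps-b" =
    {
        prefix        = <dc im>
        dsa-name      = <dc im><cn imps-b>
        address       = tcp "imps-b.example.com" port 20389
        auth-levels   = anonymous, clear-password
        dsa-flags     = load-share
        dsp-idle-time = 100000
    };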



Comments

20 days ago

Thanks Alan, understood now.

21 days ago

Hi Sumeet,

Short answer:  Not likely, as this architecture is very similar to the existing model.   

All IMPS servers (on the same host or on remote hosts) communicate via a local or remote DSA router, which manages communication to the four (4) IMPD data DSAs.

The only obvious contention would be if the host co-locating both IMPS services has fewer than four (4) virtual CPUs.  Each IMPS slapd service can be expected to consume 100% of one CPU (i.e. 25% in top metrics on a 4 vCPU system), with another 1-2 CPUs consumed by the CA Directory DSA router used by the IMPS services.  The remaining vCPUs would be shared with the existing resources on the same server.
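
As a quick way to observe this on the host, the commands below (assumed to run on Linux; the process names slapd and dxserver may differ slightly by release) report the vCPU count and per-process CPU usage:

    # Number of vCPUs available on the host
    nproc

    # One-shot snapshot of per-process CPU for the IMPS slapd services
    # and the CA Directory dxserver (router/data DSA) processes
    top -b -n 1 | grep -E 'slapd|dxserver'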

The value statement of using CA Directory as the provisioning store is that it is a true X.500 directory, where the last update "wins".  So we are unlikely to see any collision when writing data.
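
To make the "last update wins" behavior concrete, here is a minimal sketch using the standard OpenLDAP ldapmodify/ldapsearch clients; the hostnames, bind DN, password, and entry DN are hypothetical placeholders, not real IMPS DNs.

    # Two IMPS nodes update the same attribute of the same entry.
    # Whichever write arrives last is the value that persists -- no lock, no merge.

    # Write arriving via the first IMPS node: set telephoneNumber to 555-0001
    printf 'dn: cn=sample-account,dc=example\nchangetype: modify\nreplace: telephoneNumber\ntelephoneNumber: 555-0001\n' |
      ldapmodify -H ldap://imps-a.example.com:20389 -D "cn=admin,dc=example" -w Password01

    # Write arriving moments later via the second IMPS node: set it to 555-0002
    printf 'dn: cn=sample-account,dc=example\nchangetype: modify\nreplace: telephoneNumber\ntelephoneNumber: 555-0002\n' |
      ldapmodify -H ldap://imps-b.example.com:20389 -D "cn=admin,dc=example" -w Password01

    # A subsequent read returns telephoneNumber: 555-0002 -- the last update "won".
    ldapsearch -LLL -H ldap://imps-a.example.com:20389 -D "cn=admin,dc=example" -w Password01 \
      -b "cn=sample-account,dc=example" -s base telephoneNumber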



To validate the above statements:   

We can execute the $DXHOME/samples/dxsoak command with 200 threads, running the same dxmodify load against both TCP 20389 / 30389 slapd services so the exact same metrics are captured for each.

   If we open additional shells, we can execute this command multiple times to span 1600-3200 threads (800-1600 per IMPS slapd service); a shell sketch of this fan-out follows below.

   [These thread counts assume we have reduced the OS stack-size ulimit from the default of 8192 to 1024 (ulimit -s 1024) and updated im_ps.conf from the default of 200 threads to 800-1600 threads.]
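
A minimal shell sketch of that fan-out is below, assuming hypothetical hostnames and credentials. Because the exact dxsoak options vary by CA Directory release, a plain ldapsearch loop is used here as a stand-in load generator; substitute the real dxsoak invocation for the inner loop when running the actual soak test.

    #!/bin/bash
    # Sketch: drive parallel query load against both co-located IMPS slapd services.
    # HOST, BINDDN, BINDPW, and BASEDN are assumptions for illustration only.

    ulimit -s 1024                       # smaller per-thread stack so more threads fit

    HOST=imps-host.example.com
    BINDDN="cn=admin,dc=example"
    BINDPW="Password01"
    BASEDN="dc=example"

    for port in 20389 30389; do          # both co-located IMPS slapd services
      for worker in $(seq 1 8); do       # 8 background workers per port
        (
          for n in $(seq 1 200); do      # 200 queries per worker
            ldapsearch -LLL -H "ldap://${HOST}:${port}" -D "$BINDDN" -w "$BINDPW" \
              -b "$BASEDN" -s base "(objectClass=*)" dn > /dev/null
          done
        ) &
      done
    done
    wait
    echo "Load complete against ports 20389 and 30389."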



25 days ago

@Alan Baugher Hi Alan, if multiple IMPS nodes participate in write operations against the IMPD store, do you think there would be a race condition where the 1st IMPS node and the 2nd IMPS node may try to write the same record?

Thanks,
Sumeet