Hi all,
I've honestly looked through as much as I can on the forums and DocOps, but I'm struggling to understand Software Delivery throughput control and the good-practice scale/sizing of my ITCM systems. I want to patch over 500 machines each month after Microsoft releases its OS security updates, and I'd like to understand what CA's good practice is for optimal/reasonable throughput.
Two data centres hold all the servers to be patched - no client/workstation PCs.
My networks are fast: 10Gb local LAN, plus a fast dark-fibre WAN link to the second data centre.
I currently have a single Domain Manager (DM) and two Scalability Servers (SS): the DM and one SS in data centre 1, the other SS in data centre 2.
Each monthly patch payload can range from as little as 150MB right up to, say, 1.5GB.
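As a back-of-envelope illustration of why staging matters (my own arithmetic from the numbers above, not a measured figure): at the 1.5GB worst case, serving all 500 targets directly from the DM would mean roughly 500 × 1.5GB = 750GB of transfer from one box, whereas staging the package to the two SS first costs only 2 × 1.5GB = 3GB, with the fan-out to targets then served locally by each SS.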
The DM is a Win2016 VM with 16GB RAM, 4 vCPUs and fairly fast standard virtual disks.
The SS are both Win2016 VMs with 8GB RAM and 2 vCPUs.
There are 500 target servers to patch, and the optimal patching window is one weekend. The machines are currently done over four 'phases' (roughly 125 servers each), simply to break them down: the machines with less business risk go in phase 1, we evaluate success, and then we move on to the servers with greater business risk in phases 2, 3 and 4.
I just want to work out an efficient model for good patching throughput, balancing the effect on SQL Server, the Domain Manager, the Scalability Servers and the network (if relevant). If there are good-practice policy settings for Patch Manager and/or Software Delivery, great. If there is a spreadsheet to help me do "what-if" assessments of throughput, even better; a rough sketch of the kind of model I mean is below.
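To show the kind of what-if model I'm after, here's a minimal Python sketch. Every number in it (the concurrent-job cap, the effective per-job bandwidth, the per-job overhead) is a placeholder assumption of mine, not a measured value or a CA-documented setting:

# Rough what-if model for Software Delivery patch throughput.
# All inputs are placeholder assumptions, not measured or CA-recommended values.

def phase_time_hours(targets, payload_gb, concurrent_jobs,
                     effective_mbps_per_job, overhead_min_per_job=5):
    """Estimate wall-clock hours to deliver one payload to `targets` machines."""
    # Transfer time per job: GB -> Mbit, divided by assumed per-job throughput.
    transfer_sec = (payload_gb * 8 * 1024) / effective_mbps_per_job
    per_job_min = transfer_sec / 60 + overhead_min_per_job
    waves = -(-targets // concurrent_jobs)  # ceiling division: jobs run in batches
    return waves * per_job_min / 60

# Example: one phase of ~125 of the 500 servers, at both payload extremes,
# assuming 50 concurrent jobs across the two SS and 200 Mbit/s per job.
for payload_gb in (0.15, 1.5):
    hours = phase_time_hours(targets=125, payload_gb=payload_gb,
                             concurrent_jobs=50, effective_mbps_per_job=200)
    print(f"payload {payload_gb:>4} GB -> ~{hours:.1f} h per phase")

Obviously the realistic values for concurrency and per-job throughput are exactly what I'm hoping someone can point me at, so I can plug them in.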
Any thoughts on considerations, good practice, config values or architecture options would be welcome.
Kind Regards,
Bob.
------------------------------
Never trust a man who when left alone with a tea cosy doesn't try it on.
------------------------------