Welcome to the Community,
I have set up this kind of configuration multiple times with XenApp 5 on W2K3; it should be much the same for W2K8.
The configuration I usually have:
- 2 QC Intel CPUs (54xx or better)
- min. 20 GB RAM
- RAID controller with at least 512MB BBWC/BBU (battery backed cache) for write-back operation
- either 2 SAS disks 10k or better in RAID1 (my average virtual disk size for XenApp is ~30GB)
- or 4 disks as 2 x RAID1 (see the reason below)
- 2 NICs (1 vSwitch with both NICs connected to 2 different physical switches)
- up to 4 VMs on one host (1 MS Enterprise license allows you to run up to 4 VMs on one host)
- 2 vCPUs per VM (this performs better for a XenApp workload than 1 vCPU)
- RAM per VM -> the total amount of RAM minus ~2 GB (for the Hypervisor), divided by the number of VMs (e.g. with 16 GB and 4 VMs --> ~3.5 GB per VM)
- NO RAM OVERCOMMITMENT! I usually set the memory reservation to the full configured amount of RAM. This way I avoid memory ballooning, and I also save disk space, because the VM swap file is then created with zero size.
- Create the virtual disks as "eagerzeroedthick" (check the FT/clustering option in the Create Disk wizard) for better disk performance.
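The RAM-per-VM rule of thumb above can be sketched as a small calculation (a minimal sketch; the function name and the ~2 GB hypervisor overhead are just the rule of thumb from this post, not an official VMware formula):

```python
# Rule of thumb from this post: keep ~2 GB for the hypervisor,
# then split the remaining RAM evenly across the VMs on the host.
def per_vm_ram_gb(total_ram_gb, vm_count, hypervisor_overhead_gb=2.0):
    """Return the RAM to configure (and fully reserve) per VM, in GB."""
    usable = total_ram_gb - hypervisor_overhead_gb
    return usable / vm_count

# Example from the post: 16 GB host RAM, 4 VMs -> ~3.5 GB per VM
print(per_vm_ram_gb(16, 4))   # 3.5
# With the recommended minimum of 20 GB and 4 VMs -> 4.5 GB per VM
print(per_vm_ram_gb(20, 4))   # 4.5
```

Because the reservation equals the configured RAM, this per-VM figure is also the amount of physical RAM each VM will actually claim.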
The reason for configuring 2 x RAID1 instead of 1 RAID10 is that most RAID controllers cannot split a RAID10 into multiple logical volumes. I like to keep the Hypervisor in its own logical volume, which provides the ability - in case of a disaster - to reinstall it without affecting the VMs on the VMFS volume. Therefore I split the RAID into a 10 GB logical volume (~5 GB for the Hypervisor and scratch partition, plus a small VMFS datastore for e.g. ISO files) and a second logical volume for the "production" datastore.
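The split described above works out as follows (a rough sketch; the 300 GB disk size is purely an illustrative assumption, not from the post):

```python
# Sketch of splitting one RAID1 array into two logical volumes,
# as described above. The disk size is a hypothetical example.
disk_gb = 300                 # one SAS disk (assumed size for illustration)
raid1_usable_gb = disk_gb     # RAID1 mirrors two disks, so usable = one disk

# 10 GB logical volume: ~5 GB Hypervisor + scratch, rest a small ISO datastore
hypervisor_lv_gb = 10
production_lv_gb = raid1_usable_gb - hypervisor_lv_gb

print(production_lv_gb)       # 290 -> GB left for the "production" datastore
```

The point of the split is that wiping and reinstalling the 10 GB logical volume never touches the production VMFS volume, so the VMs survive a hypervisor reinstall.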
André