VMware vSphere

  • 1.  PCIe SSD Disks

    Posted Jan 30, 2018 12:49 PM

    I tried to replace the SAS disks in my Cisco UCS C420 M3, but the performance is quite poor in any RAID level... I tried RAID 10, 5, and 1.

    For that reason I am thinking about putting SSDs in a PCIe slot.

    These are the options I am considering:

    2x Kingston KC1000 960 GB PCIe

    or

    2x Intel SSD 900p 480GB PCIe

    or

    Intel® SSD DC P3520 Series (1.2TB, 1/2 Height PCIe 3.0 x4, 3D1, MLC)

    What do you think about it?

    My stupid question is: how do I make a RAID from them, e.g. RAID 1? I have never worked with PCIe SSDs before.

    The main reason is that I want to bypass the RAID controller in my server, because I think it is the cause of the slow storage in ESXi.

    Thanks for the help.

    Br

    Dave



  • 2.  RE: PCIe SSD Disks

    Posted Jan 30, 2018 02:23 PM

    I tried to replace the SAS disks in my Cisco UCS C420 M3, but the performance is quite poor in any RAID level... I tried RAID 10, 5, and 1.

    I would focus on solving this problem rather than throwing new hardware at it. Is this a production system or just a home lab system? If it's production, you shouldn't be running VMs off local storage to begin with as that's only going to lead to more issues.



  • 3.  RE: PCIe SSD Disks

    Posted Jan 30, 2018 03:01 PM

    Why do you prefer external storage?

    With internal storage I get around 70K IOPS for random reads and 9K IOPS for writes, but if I use:

    dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync

    For a 1 GB file I sometimes get 60 MB/s, sometimes 120 MB/s, and the maximum was 220 MB/s.

    with:

    sudo hdparm -Tt /dev/sda

    Timing cached reads: 7000MB/s

    Timing buffered disk reads: 150-250 MB/s

    And that is with 4x SSDs in RAID 10.



  • 4.  RE: PCIe SSD Disks

    Posted Jan 30, 2018 03:19 PM

    Why do you prefer external storage?

    Because of two important factors: availability and resiliency. With only local storage, if the ESXi host dies or the storage inside it dies, you've got nothing (unless you're synchronously replicating that data to another node, or you're willing to wait and restore from backup). Either of those scenarios renders your data unavailable. As for resiliency, an external storage array is going to be far more resilient than local host storage: if a controller fails, a redundant controller takes over and keeps servicing I/O requests. If this setup is for production VMs, you really should not be using local storage, regardless of how good the performance may be.

    Second, your dd test is highly rigged against you.

    dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync

    In this command you specify a block size of one gigabyte. That's huge and not realistic for any block size the storage subsystem would normally encounter. If you want to control the output file size, use a sensible block size and let the count parameter determine the total (block size multiplied by count gives the output file size). You're also declaring dsync on the output, which bypasses any write cache and waits for each block of the specified size to be acknowledged after it is written. Neither of those is indicative of real-world performance behavior.
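
    As a rough sketch of a more representative sequential-write test (assuming GNU coreutils dd; same 1 GB output file, but a sane block size and direct I/O instead of per-write dsync), something like this should behave much better:

    dd if=/dev/zero of=/tmp/test1.img bs=1M count=1024 oflag=direct

    That writes 1024 one-megabyte blocks and bypasses the page cache with O_DIRECT, rather than forcing a synchronous flush of a single 1 GB write, which is much closer to how the storage subsystem is exercised by real workloads.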



  • 5.  RE: PCIe SSD Disks

    Posted Jan 30, 2018 04:02 PM

    On the UCS C420 M3 I have dual flash memory used only for the VMware installation; it is separate from the local storage. I hope that makes the solution safer.

    Here are fio stats:

    ./fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randread

    Starting 1 process

    test: Laying out IO file(s) (1 file(s) / 4096MB)

    Jobs: 1 (f=1): [r] [100.0% done] [347.1M/0K /s] [89.6K/0  iops] [eta 00m:00s]

    test: (groupid=0, jobs=1): err= 0: pid=6134: Tue Jan 30 16:52:52 2018

      read : io=4096.0MB, bw=353264KB/s, iops=88316 , runt= 11873msec

      cpu          : usr=10.31%, sys=89.15%, ctx=1158, majf=0, minf=69

      IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%

         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%

         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%

         issued    : total=r=1048576/w=0/d=0, short=r=0/w=0/d=0

    Run status group 0 (all jobs):

       READ: io=4096.0MB, aggrb=353264KB/s, minb=353264KB/s, maxb=353264KB/s, mint=11873msec, maxt=11873msec

    Disk stats (read/write):

        dm-0: ios=1046121/3, merge=0/0, ticks=162248/0, in_queue=162468, util=99.18%, aggrios=1048576/2, aggrmerge=0/1, aggrticks=162492/0, aggrin_queue=162364, aggrutil=98.92%

      sda: ios=1048576/2, merge=0/1, ticks=162492/0, in_queue=162364, util=98.92%

    ./fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75

    test: (g=0): rw=randrw, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=64

    fio-2.0.9

    Starting 1 process

    Jobs: 1 (f=1): [m] [100.0% done] [136.7M/47436K /s] [34.1K/11.9K iops] [eta 00m:00s]

    test: (groupid=0, jobs=1): err= 0: pid=6145: Tue Jan 30 16:55:59 2018

      read : io=3070.9MB, bw=144364KB/s, iops=36090 , runt= 21782msec

      write: io=1025.2MB, bw=48194KB/s, iops=12048 , runt= 21782msec

      cpu          : usr=8.54%, sys=46.39%, ctx=70414, majf=0, minf=4

      IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%

         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%

         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%

         issued    : total=r=786133/w=262443/d=0, short=r=0/w=0/d=0

    Run status group 0 (all jobs):

       READ: io=3070.9MB, aggrb=144363KB/s, minb=144363KB/s, maxb=144363KB/s, mint=21782msec, maxt=21782msec

      WRITE: io=1025.2MB, aggrb=48194KB/s, minb=48194KB/s, maxb=48194KB/s, mint=21782msec, maxt=21782msec

    Disk stats (read/write):

        dm-0: ios=781794/260943, merge=0/0, ticks=981980/286028, in_queue=1268200, util=99.60%, aggrios=786133/262451, aggrmerge=0/4, aggrticks=987076/287336, aggrin_queue=1274180, aggrutil=99.40%

      sda: ios=786133/262451, merge=0/4, ticks=987076/287336, in_queue=1274180, util=99.40%

    What do you think?



  • 6.  RE: PCIe SSD Disks

    Posted Jan 30, 2018 05:00 PM

    On the UCS C420 M3 I have dual flash memory used only for the VMware installation; it is separate from the local storage. I hope that makes the solution safer.

    Anything that's host-only and not accessible to other ESXi hosts isn't going to make the solution safer, though it may make it perform better. So, again, while performance is all well and good, you have no resiliency built into the system and cannot even do things like routine maintenance. Your VMs are anchored to the host they run on, which reduces your availability in several ways.



  • 7.  RE: PCIe SSD Disks

    Posted Jan 30, 2018 10:24 PM

    OK, and what do you think about embedded performance?



  • 8.  RE: PCIe SSD Disks

    Posted Jan 30, 2018 11:12 PM

    Embedded performance... of the vCSA (embedded PSC)?



  • 9.  RE: PCIe SSD Disks

    Posted Jan 31, 2018 08:27 AM

    Yes, embedded performance.



  • 10.  RE: PCIe SSD Disks

    Posted Jan 31, 2018 12:08 PM

    Embedded vCSA performance is very good, although don't deploy anything smaller than the "Small" size, even in a lab environment. The integration of PSC-related services means it has higher resource requirements than the standalone vCenter.