vSphere Storage Appliance

  • 1.  Need large VM disk, but will block size be a limitation?

    Posted Aug 13, 2010 09:48 AM

    I'm planning to virtualize a large file server we use as a 'home folder' area for the site. The physical box has 2-3TB of space, and I'm planning to size the VM at about 3TB. I've created two 1.5TB iSCSI targets on our storage server, added one to our vSphere 4.1 host, then added the other as an extent. This means I see one 2.93TB LUN, and this part is working OK.

    Now I need to add a disk to the VM (Windows 2008 R2), but I'm obviously limited by the block size. When formatting the 3TB LUN, I selected the default 1MB block size, which means the maximum size volume I can (currently) add to the VM is 256GB, which is not what I need. I understand that if I increase the block size to the maximum of 8MB, I can create a 2TB volume, which is much nearer the mark, but I'm concerned about whether an 8MB block size will be efficient enough. As this is a home-folder server, there will be hundreds of thousands of small files, which could be a potential problem. I've read about sub-block allocation to optimize smaller file storage, but I don't know whether it's available by default, or whether it's OS dependent.
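    The block size figures above follow a simple rule on VMFS-3: each doubling of the block size doubles the maximum single-file (VMDK) size, starting from 1MB → 256GB. A minimal sketch of that arithmetic (the helper name is illustrative, not a VMware API):

    ```python
    # Sanity check of the VMFS-3 block size -> max file size relationship
    # described above: 1MB blocks cap a single VMDK at 256GB, and each
    # doubling of the block size doubles that cap, up to 8MB -> ~2TB.

    def vmfs3_max_file_gb(block_size_mb: int) -> int:
        """Approximate max file size (GB) for a given VMFS-3 block size (MB)."""
        return block_size_mb * 256  # 1MB->256GB, 2MB->512GB, 4MB->1TB, 8MB->2TB

    for bs in (1, 2, 4, 8):
        print(f"{bs}MB block size -> max file ~{vmfs3_max_file_gb(bs)}GB")
    ```

    This matches the 256GB ceiling hit with the default 1MB block size and the ~2TB ceiling mentioned for 8MB blocks.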

    The other alternative I have is to create multiple 256GB VMDKs, but this is not how I envisaged it working. I've also thought about using RDMs, so would this be an alternative worth considering? There's no data on the LUN at the moment, so I can re-format/delete if necessary.



  • 2.  RE: Need large VM disk, but will block size be a limitation?

    Posted Aug 13, 2010 09:50 AM

    You can use the iSCSI software initiator from within Windows, so you won't be limited by block size at all.


    ---

    MCSA, MCTS Hyper-V, VCP 3/4, VMware vExpert

    http://blog.vadmin.ru



  • 3.  RE: Need large VM disk, but will block size be a limitation?

    Posted Aug 13, 2010 12:38 PM

    Thanks for the suggestion. I used to use software initiators in Windows guests to connect to iSCSI storage, but it's something I'm trying to move away from.

    Maybe you could tell me: if I use RDMs to map both 1.5TB iSCSI disks to the VM directly, then stripe them in Windows 2008 to create one large 3TB volume, would this be a potential solution? Would this bypass the block-size limitation of putting the virtual disks on VMFS datastores?



  • 4.  RE: Need large VM disk, but will block size be a limitation?

    Broadcom Employee
    Posted Aug 13, 2010 01:36 PM

    Just an FYI on limitations and block sizes:

    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1003565




  • 5.  RE: Need large VM disk, but will block size be a limitation?

    Posted Aug 13, 2010 01:40 PM

    >if I use RDM's to map both 1.5TB iSCSI disks to the VM direct, then stripe them in Windows 2008 to create 1 large 3TB volume

    The size limitation ONLY applies when you create a VMFS datastore, whether Fibre Channel or iSCSI. Since you have Windows, forget iSCSI: make it an NFS datastore and one BIG volume, and ESX won't care how big it is. You aren't limited by block size or capped at 2TB; it's just NFS space.



  • 6.  RE: Need large VM disk, but will block size be a limitation?

    Posted Aug 16, 2010 12:37 PM

    >I understand if I increase the block size to the max 8MB, I can create a 2TB volume, which is much nearer the mark, but I'm concerned that, if having an 8MB block size, will this be efficient enough? As this is a home server, there will be 100's thousands of small files, which could be a potential problem. I've read about sub-block allocation, to optimize smaller file storage, but I don't know if this is available by default, or whether it's O/S dependant.

    I do not think you will have to worry about the block size in that respect. The block size applies to the VMFS file system, which will just hold your very large VMDK file (and a small number of other files, which will be sub-block allocated). Inside your VMDK file you will have the Windows-native NTFS file system with a certain cluster size, and it is only that cluster size that could suffer from your 100,000+ small files.

    Since the default NTFS cluster size is just 4KB, it will normally not be a great problem, whereas an 8MB cluster size of course would be.
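    The point above can be put in rough numbers: each file wastes, on average, about half an allocation unit of tail space, so the cluster size dominates the slack. A small sketch, where the file count is an assumption chosen to match the "hundreds of thousands of small files" described earlier:

    ```python
    # Rough slack-space estimate: on average each file wastes about half
    # an allocation unit, so small clusters (NTFS default 4KB) are cheap
    # while an 8MB allocation unit would waste terabytes.
    KB, MB, GB = 1024, 1024**2, 1024**3

    def avg_slack_bytes(cluster_size: int) -> float:
        """Expected wasted tail space per file for a given cluster size."""
        return cluster_size / 2

    files = 300_000  # assumed count of small files (illustrative)
    for cluster in (4 * KB, 8 * MB):
        waste_gb = files * avg_slack_bytes(cluster) / GB
        print(f"{cluster // KB}KB clusters: ~{waste_gb:.1f}GB wasted over {files} files")
    ```

    With 4KB clusters the slack stays well under 1GB; an 8MB allocation unit would waste over a terabyte on the same file set, which is why the VMFS block size (which only governs the datastore, not the guest file system) is the wrong place to look for this problem.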



  • 7.  RE: Need large VM disk, but will block size be a limitation?

    Posted Aug 17, 2010 04:11 PM

    Thanks for the replies. Lots of useful information there. I recently read a document discussing performance differences between VMFS and RDM volumes, and it seems VMware is making a big play on how good VMFS is now. It still seems to lag in some areas though, and in this case, because raw performance is going to be a big factor, I've elected to map both 1.5TB LUNs as RDMs, then stripe them in Windows to create a single 3TB NTFS volume.

    I've done some testing, and bearing in mind this is a standard Win2k8R2 VM with a single vCPU and 2GB vRAM, I got 40-50MB/s write and 100-110MB/s read over iSCSI, which I think is pretty good.
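    Those read figures are plausible for a single gigabit iSCSI path (the link speed isn't stated above, so that's an assumption). After Ethernet/IP/TCP/iSCSI framing, roughly 90% of the raw line rate is usable, which puts the ceiling near the 100-110MB/s observed:

    ```python
    # Plausibility check against one GigE link (assumed for the iSCSI
    # path). The ~0.90 usable fraction after protocol overhead is a
    # rule-of-thumb estimate, not a measured value.
    LINE_RATE_BITS = 1_000_000_000  # 1GbE
    OVERHEAD = 0.90                 # assumed usable fraction

    usable_mb_s = LINE_RATE_BITS / 8 * OVERHEAD / 1_000_000
    print(f"~{usable_mb_s:.0f}MB/s usable on one GigE link")
    ```

    So 100-110MB/s read is essentially saturating one link; the lower write figure is more likely bounded by the storage server than by the network.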

    Still doing some more testing, but in this config, block size doesn't even become an issue.



  • 8.  RE: Need large VM disk, but will block size be a limitation?

    Posted Aug 17, 2010 04:18 PM

    >I've elected to map both 1.5TB LUNs as RDM's, then stripe them in Windows to create a single 3TB NTFS volume.

    You don't NEED to stripe it. Use NFS. iSCSI is an Ethernet protocol, and so is NFS. Instead of setting up iSCSI to attach the volume, just use NFS.

    You obviously have the bandwidth, so make it easier: forget LUNs, forget VMFS. All you need is NFS pointing to a 3TB volume on the SAN; that's it, you're done. No block size, no iSCSI target, no VMFS. Nothing, just a place for your files. Done.



  • 9.  RE: Need large VM disk, but will block size be a limitation?

    Posted Aug 19, 2010 08:47 AM

    It seems like you're a huge fan of NFS for VMware use. I do use it, mainly for connecting to OpenFiler servers, and on the whole it works very well, but in this case I feel iSCSI will offer me some benefits, mainly:

    1. It's a true, modern SAN block-I/O network protocol, designed from the ground up for exactly this job.

    2. With iSCSI, I get the advantage of multipathing failover and round-robin load balancing. We will also be using the same front-end protocol on our SAN as on the new NAS this storage is being served from, making support easier.

    I also don't have enough Ethernet ports on the hosts in the cluster to support both NFS and the iSCSI config I have in mind, so unfortunately, in this instance, I'm going to have to drop NFS.

    Thanks for the guidance.