VMware vSphere

  • 1.  Virtual Disks or NFS mounts

    Posted Oct 28, 2011 06:33 PM

    Hi All,

    I'm new to ESXi and to setting up virtual machines. My question is about the best practice for providing storage to an application server.

    1. As an example, I have an NFS server able to provide 1 TB of storage through mounts (shares).

    2. I create an NFS share /vmdks/ and export this as a datastore.

    3. I create a VM with a 4 GB virtual disk and install, say, W2K8 on it.

    4. Now I need to run a database within the VM and need to provide up to, say, 500 GB of storage for the database.

    5. So I create another NFS share /database/ and export it.

    6. Now I have two options for storing data on /database/:

    7. Access the /database/ share from within the W2K8 VM.

    8. Create a VMware datastore on /database/, create a new 500 GB virtual disk on that datastore, and assign the virtual disk to the VM.

    The question is: which is the preferred, sane, and usual option - (7) or (8)?

    Thx,
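    For reference, option (8) above, mounting the export as a datastore and carving a virtual disk out of it, could be sketched from an ESXi 5 shell roughly like this; the server address, share path, datastore label, and VMDK path are all placeholder values:

    ```shell
    # Mount the NFS export as a datastore on the ESXi host
    # (192.168.1.50, /database, and the "database" label are example values)
    esxcli storage nfs add --host 192.168.1.50 --share /database --volume-name database

    # Create a 500 GB thin-provisioned virtual disk on the new datastore
    # (the dbserver directory must already exist on the datastore)
    vmkfstools -c 500G -d thin /vmfs/volumes/database/dbserver/dbserver_data.vmdk
    ```

    The VMDK can then be attached to the VM as an additional disk from the vSphere Client. On ESXi 4.x the equivalent mount command is esxcfg-nas -a.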



  • 2.  RE: Virtual Disks or NFS mounts

    Posted Oct 28, 2011 06:49 PM

    snmstorage wrote:

    Hi All,

    I'm new to ESXi and to setting up virtual machines. My question is about the best practice for providing storage to an application server.

    1. As an example, I have an NFS server able to provide 1 TB of storage through mounts (shares).

    2. I create an NFS share /vmdks/ and export this as a datastore.

    3. I create a VM with a 4 GB virtual disk and install, say, W2K8 on it.

    4. Now I need to run a database within the VM and need to provide up to, say, 500 GB of storage for the database.

    5. So I create another NFS share /database/ and export it.

    6. Now I have two options for storing data on /database/:

    7. Access the /database/ share from within the W2K8 VM.

    8. Create a VMware datastore on /database/, create a new 500 GB virtual disk on that datastore, and assign the virtual disk to the VM.

    The question is: which is the preferred, sane, and usual option - (7) or (8)?

    Thx,

    In #3, 4 GB for W2K8? Not likely... 40 GB, yes (or 30 GB)...

    IMO, you're better off having either DAS or NAS/SAN LUNs for ESXi to use as datastores, then allocating virtual drives from those (keep each VM's files together, so make sure you size the LUNs correctly)...

    I would suggest reading the docs on ESXi (v5) here: http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html

    That will give you better insight into how to configure things. If you're running an older release, then read the documents that apply.



  • 3.  RE: Virtual Disks or NFS mounts

    Posted Oct 28, 2011 07:17 PM

    IMO, you're better off having either DAS or NAS/SAN LUNs for ESXi to use as datastores, then allocating virtual drives from those (keep each VM's files together, so make sure you size the LUNs correctly)...

    Would creating virtual drives be the usual/preferred way of assigning storage accessible within a VM? So if I create a mail server VM, all the Exchange data would reside on a virtual disk?

    In my case DAS is ruled out because I will be trying out more than one physical server, and SAN is ruled out because I'm not investing in iSCSI or FC. NAS suits me fine, and since it's accessible by ESXi I'll probably stick with it.

    Thanks for the link. I'm slowly reading the documents one at a time :-)



  • 4.  RE: Virtual Disks or NFS mounts

    Posted Oct 30, 2011 01:05 AM

    It's best to mount the NFS exports to the ESXi hosts, and then create virtual disks. Mounting directly within the guest OS is typically only used when size is an issue (such as the free Windows Server iSCSI initiator for >2TB disks on vSphere 4).
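    For completeness, the in-guest approach (option 7) on a W2K8 guest would look roughly like this. It assumes the Client for NFS feature (part of Services for NFS) is installed; the server name and drive letter are placeholders:

    ```shell
    REM Inside the W2K8 guest, with the Client for NFS feature installed:
    REM map the export to a drive letter (nfsserver and Z: are example values)
    mount -o anon \\nfsserver\database Z:
    ```

    As noted above, this keeps the data outside the VM's own disk files, which is exactly why host-side VMDKs are usually easier to manage.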

    The reasons I'd cite for mounting at the host: disk management is easier (nothing hidden inside the guest OS), and the VMDK format is purpose-built for hosting VM disks.

    Regarding your Exchange example, I believe Microsoft supports only block-based disks (FC or iSCSI) for the mailbox databases (other roles and the system are fine on NFS). Not that NFS won't work; check with your support agreement.



  • 5.  RE: Virtual Disks or NFS mounts

    Posted Oct 31, 2011 03:29 PM

    iSCSI isn't any more expensive than setting up a NAS; it's just a different protocol. Depending on the SAN/NAS you get, it could very well support both iSCSI and NFS. In my experience the better devices support both, letting YOU decide which to use. And since you'll need an Ethernet backbone to work with a NAS (NFS) anyway, that's no different.

    The way I've always set up hosts and storage: you create LUNs for the hosts to see, format them with VMFS, and that makes them available for VMDKs and the VMs. Unless you plan on playing with data dedupe technologies, iSCSI is a very viable option.

    BTW, I've set up vSphere clusters/environments that used iSCSI for all the storage. It's historically been far less expensive than Fibre Channel, and when done right you get great performance from it (without busting the budget). If you already have decent physical switches on your network, you can do iSCSI easily.
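    On ESXi 5, pointing a host at an iSCSI array is only a few commands. A rough sketch, where the adapter name and target address are placeholder values:

    ```shell
    # Enable the software iSCSI adapter on the host
    esxcli iscsi software set --enabled=true

    # Add a send-targets discovery address for the array
    # (vmhba33 and 192.168.1.10 are example values; find the real adapter
    # name with: esxcli iscsi adapter list)
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.10

    # Rescan so newly presented LUNs show up
    esxcli storage core adapter rescan --adapter=vmhba33
    ```

    The discovered LUNs can then be formatted with VMFS from the vSphere Client, just like any other block storage.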

    I have a single host in my home lab right now, but I have an iSCSI storage device (QNAP TS-559 Pro+) and an HP ProCurve 2510G-24 switch. When I set up more hosts, it will be easy to add the iSCSI LUNs to them. I'm hoping to add either one more host, or change to two new hosts, before the end of this year or early next year.