View Only
  • 1.  Oracle VMware performance vs Physical

    Posted Aug 23, 2017 08:44 PM

    I have a question about the performance of running an Oracle database on a
    physical server versus a VM. My DBA wants a physical server, as he feels
    performance would be much better, especially with disks holding lots and
    lots of files. He wants to use iSCSI to LUNs on a NetApp FAS8020 instead of
    VMDKs on an NFS datastore. My feeling is that the performance difference
    would be negligible. The network connection to the NetApp in either option
    would be 10Gb. I am looking for opinions on the two solutions: whether he
    would see better performance with LUNs over NFS, or whether they would be
    about the same. I feel making it a VM guest with allocated resources would
    be the better solution for many reasons. Part of his concern is the Linux
    check-disk (fsck) function taking too long to run over NFS (10 TB in 15
    disks), taking over a day to run on all disks; he thinks LUNs would be
    quicker.
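    One way to settle the performance question before committing is to measure
    both paths with the same tool. A minimal fio sketch (the device path, block
    size, and run time below are placeholders to adapt to your setup):

```shell
# Hypothetical benchmark: run once against the iSCSI LUN (a /dev/sdX block
# device) and once against a test file on the NFS-backed VMDK, then compare.
# 8 KB random reads roughly approximate Oracle block I/O.
fio --name=oracle-randread \
    --filename=/dev/sdX \
    --rw=randread --bs=8k --direct=1 \
    --ioengine=libaio --iodepth=32 --numjobs=4 \
    --runtime=60 --time_based --group_reporting
```

    Comparing IOPS and latency from the two runs gives a concrete answer for
    this specific workload instead of an opinion.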

    My recommended solution.

    I talked with my NetApp engineer about creating the storage my DBA wants
    for his new server. My DBA asked for 10 disks to be created: five of them
    on the NetApp SAS disks, 1 TB each, for a total of 5 TB of storage; and
    five of them on the NetApp SATA disks, 1 TB each, for another 5 TB. He
    also wanted to know if we would be able to detach the disks from his new
    server and reattach them to either of the two existing VM guests,
    v00hubdb-clone & v00hubdb-clone2, or vice versa.

    If we used s00Hubdb-new as a physical server, moving the disks between it
    and a different physical or virtual server, as the DBA wants to be able to
    do, would be more problematic, as the file structures of the disks are
    different. If we made it a VM guest, moving a disk from one guest to
    another would be easy to accomplish. So if moving the disks between
    servers is important, having them all be VM guests is the preferred option.

    We talked about having S00HUBDB-New as a physical server versus a VM
    guest. If we took the server the DBA just finished with and used VMware
    Converter to do a P-to-V (physical-to-virtual) conversion, it would make
    the life and utility of this server much easier, and the DBA would not
    have to recreate the server. We would convert it to a virtual machine,
    store it on the NetApp, then install VMware on the physical server,
    add/register the server (VM guest) to the new host, create and attach the
    drives, and it would be ready to go. We could reserve or allocate the
    resources our DBA needs for that specific guest so it always has the
    resources he wants, since performance seems to be his biggest concern in
    wanting it to be a physical box.

    If it were virtual, we would be able to move it to a different host if the
    physical host needed maintenance or failed. We could keep it running, with
    a hit in performance, while the other host was being taken care of. We
    would also gain all the normal VMware advantages.

    In talking with my NetApp engineer, we could create either NFS shares or
    LUNs for this new physical server, and the difference in performance would
    be negligible. However, if we converted it to a VM and used the NFS shares
    we already have, it would make moving the disks around much easier: we
    could just detach them from one server, move the VMDK file to the new
    server's folder, then reattach them to the new server. We could use the
    same process to move them back if needed.
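    The detach/move/reattach flow above can be sketched from the ESXi shell.
    The datastore, folder, and disk names here are hypothetical:

```shell
# After detaching the disk from the source VM in the vSphere client,
# vmkfstools -E moves/renames the VMDK descriptor and its data file together.
vmkfstools -E "/vmfs/volumes/nfs_ds/srcvm/data01.vmdk" \
              "/vmfs/volumes/nfs_ds/dstvm/data01.vmdk"
# Then attach dstvm/data01.vmdk to the target VM as an existing hard disk.
```

    Using vmkfstools (rather than copying files by hand) keeps the descriptor
    and extent files consistent.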

    My DBA responded to the above proposal with the following; my NetApp
    engineer's responses are in italics.

    I do not want to convert this into a VM host. I do not want to use
    vmdk files over an NFS link.

    I want to move away from that method altogether for certain large-volume
    drive mappings (drives with millions of files).

    I need the NetApp devices to be able to be added to the s00hubdb
    physical box via the iSCSI hard disk setup.

    Yes, we can do this. We can create LUNs on the NetApp and present them to
    a physical host or a virtual guest via iSCSI.
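    On the Linux side, attaching a NetApp LUN over iSCSI typically looks like
    this with the open-iscsi tools. The portal IP and target IQN below are
    placeholders standing in for the FAS8020's iSCSI LIFs:

```shell
# Discover targets on one of the iSCSI LIFs (VLAN 210 in this setup)
iscsiadm -m discovery -t sendtargets -p 192.168.210.10:3260

# Log in to the discovered NetApp target
iscsiadm -m node -T iqn.1992-08.com.netapp:sn.example \
         -p 192.168.210.10:3260 --login

# The mapped LUN then appears as a local block device (e.g. /dev/sdX)
lsblk
```

    The same commands work identically from a physical host or a VM guest, as
    long as the initiator can reach the iSCSI VLAN.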

    Let's just proceed with getting the 10 1 TB drives added. Thanks.

    The other question I had, about possibly moving one of those drives at a
    later point back to a current VM, ... is not a need, but rather a question
    about what was possible. The VMs have a full Linux install with the
    ability to have iSCSI attachments as well. From a technical standpoint
    this should be possible, but the VM guest would need access to the 10Gb
    network as well. This is not a need right now; it was an idea I had about
    giving back some of the space in use on v00hubdb-clone and clone2 at a
    later time, without having to spend tons of hours reorganizing files down
    the road. It's not worth wasting any more time on at this point. I will
    find another solution later on.


    If I’m understanding this correctly, we would have a LUN mapped to a
    particular Linux host “A”, and later you may want to move this LUN to
    Linux host “B”. We can do this: you unmount it from Linux host “A”, then
    we unmap it from host “A” and map the LUN to Linux host “B”.

    We can also clone a LUN from
    Linux host “A” and then map the clone to Linux host “B”.

    Your Linux VM would just need a virtual adapter installed that is
    connected to iSCSI VLAN 210. Currently the FAS8020 has two iSCSI LIFs
    (one on each node) on VLAN 210.

  • 2.  RE: Oracle VMware performance vs Physical

    Posted Sep 14, 2017 06:37 PM


    My reply is slightly off topic, but it is good information to assist in your decision.

    Oracle licensing is such that if you use a virtual machine that is part of a cluster (or a standalone ESXi host), every 2 cores require one Oracle per-processor license.

    This applies not just to the cores/CPUs an Oracle virtual server has been allocated: all cores on the ESXi host, or on the cluster of hosts, are subject to the same licensing.

    For example, if your ESXi host has 4 CPUs with 8 cores each, you will require 16 Oracle processor licenses even if the Oracle VM is allocated only 8 cores. If the ESXi host is part of a cluster then, unless you restrict the Oracle VM to one ESXi host, you would have to license all cores in the cluster for Oracle.
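    The arithmetic above assumes Oracle's standard 0.5 core factor for x86
    processors; a quick sketch of the calculation (verify against Oracle's
    current core factor table before relying on it):

```shell
# Hypothetical host: 4 sockets x 8 cores, x86 core factor 0.5
sockets=4
cores_per_socket=8
total_cores=$((sockets * cores_per_socket))

# licenses = total_cores * 0.5, rounded up (integer arithmetic)
licenses=$(( (total_cores + 1) / 2 ))

echo "$total_cores cores -> $licenses processor licenses"
# prints: 32 cores -> 16 processor licenses
```

    Run against a whole cluster, total_cores is the sum across every host the
    Oracle VM could run on, which is why restricting it matters so much.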

    I know this is not what you want to hear, but having been through this numerous times, I usually recommend a physical server with 2 CPUs of 4 cores each to minimize Oracle licensing costs.