It sounds like you're running a lab where the host is only just big enough to run this single VM, so it's understandable that you'd want to maximize resource use this way. As a general rule, the vSphere product is engineered primarily for production environments. Labs do account for a large portion of the install base, but even in lab scenarios a lot of customers use the infrastructure to host workloads for other end-customers (engineers, devs, etc.). In those cases an outage still causes disruption, so it's usually important for labs to be resilient.
So, for that reason, the current best-practice guidance is to virtualize the vCenter Server and let it run as a VM on a host that it manages. Way back this wasn't necessarily the case, and the only build option was Windows, so a lot of people did dedicate hardware to the task. One of the main reasons to put vCenter in the environment as a VM is to let the infrastructure bring the same resiliency benefits to the management layer as it does to the end-user virtual machine workloads. It's also generally a best practice to separate the management infrastructure onto its own cluster of three or more hosts, apart from the workloads, although that isn't strictly necessary and a lot of users skip it for lack of resources.
Other reasons to virtualize workloads that are big enough to consume entire hosts (these do exist and a lot of customers have them) are:
- Consistency - the same environment applies to all workloads, and the same management tools can be used across all of them
- Portability - the VM can be moved to different environments or locations without imaging software
- Mobility - the VM can be migrated to another host in the event of a host hardware failure with little to no recovery effort, especially if it's moved proactively before the failure occurs. Shared storage is required for this (see the sketch after this list)
- Abstraction from hardware - removes the dependency on specific hardware that you take on when loading an OS on bare metal
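To make the mobility point concrete, here is a minimal sketch of a compute-only migration using the pyVmomi Python SDK. The vCenter address, host name, VM name, and credentials are placeholders for your environment, and it assumes the VM's disks already sit on shared storage visible to both hosts.

```python
# Minimal pyVmomi sketch: migrate a VM to another host (compute-only vMotion).
# Assumes shared storage; vcenter.lab.local, esxi02.lab.local and
# "big-workload-vm" are placeholders, not real names from this environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only: skip cert checks
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="VMware1!",
                  sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vim_type, name):
    """Return the first inventory object of vim_type whose name matches."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim_type], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.DestroyView()

vm = find_by_name(vim.VirtualMachine, "big-workload-vm")
dest_host = find_by_name(vim.HostSystem, "esxi02.lab.local")

# Disks stay on the shared datastore; only the running state moves hosts.
task = vm.RelocateVM_Task(vim.vm.RelocateSpec(host=dest_host))
print("Migration task started:", task.info.key)
Disconnect(si)
```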
Not all of those apply in a small lab environment, but they are the reasons people try to virtualize everything and why the vendor delivers the vCenter appliance the way they do. Unfortunately, there are no installable bits for Linux platforms, only for Windows. I would expect the Windows edition to go away in some future version. The move to the virtual appliance was made in order to appease Linux users who hate Windows, to spare customers the Windows licensing fees (as well as SQL Server/Oracle licensing), and to simplify the deployment as much as possible.
I would recommend just virtualizing it for these reasons. The ESXi hypervisor itself doesn't add much overhead. It's up to you whether to keep vCenter on that dedicated host or put it in the general population with the rest of the VMs, but I wouldn't bother trying to cluster that machine.
I would also recommend having some shared storage available in case you decide you want to vMotion something off its dedicated host. It could just be swing space on an NFS server hosted as a VM on the large server. That gives you an exit strategy for vacating one of the other hosts if the need arises.
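If you go that route, mounting the NFS export as a datastore on each host is straightforward. Here's a hedged pyVmomi sketch of doing it across all connected hosts; the NFS server name, export path, datastore label, and credentials are assumptions for illustration only.

```python
# Minimal pyVmomi sketch: mount an NFS export as a "swing space" datastore
# on every host, so vMotion has a common storage target.
# nfs01.lab.local, /export/swing and the connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only: skip cert checks
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="VMware1!",
                  sslContext=ctx)
content = si.RetrieveContent()

# One spec reused for every host so the datastore name matches everywhere.
spec = vim.host.NasVolume.Specification(
    remoteHost="nfs01.lab.local",   # NFS server (could itself be a VM)
    remotePath="/export/swing",     # exported path on the NFS server
    localPath="nfs-swing",          # datastore name as it appears in vSphere
    accessMode="readWrite")

host_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in host_view.view:
    host.configManager.datastoreSystem.CreateNasDatastore(spec)
host_view.DestroyView()
Disconnect(si)
```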