VMware vSphere

  • 1.  Single host in cluster with error log on non-persistent storage

    Posted Apr 26, 2020 04:49 AM

    So I have an issue that I hope has a quick fix, ha. I usually type a whole bunch of info but I'm going to try and be succinct this time. I just started a job at a place that has an ESXi cluster with nine hosts. One of those nine hosts is showing:

    System logs on host hostname.esxi.com are stored on non-persistent storage.

    I read this article and checked all the settings it talks about on each individual host, but they are the same on every one of them. Is this a bug, or is there a fix? I hate errors, even if they are superficial.

    I appreciate any help anyone can provide.
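
    In case the details matter, this is roughly what I ran from the ESXi shell on each host, on top of comparing the advanced settings in the client (a quick sketch from memory, so the exact options may need checking on your build):

    # show the local log directory and any remote loghost configured on this host
    esxcli system syslog config get

    # list the datastores the host can see, i.e. the candidates for persistent storage
    esxcli storage filesystem list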



  • 2.  RE: Single host in cluster with error log on non-persistent storage

    Posted Apr 26, 2020 06:26 AM

    Configure the location of the scratch partition and reboot the host.

    Check VMware Knowledge Base and take care that you're not affected by VMware Knowledge Base

    If you have a syslog server around, check that this host is configured to use it as well.
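
    If you do have a syslog server, something along these lines from the ESXi shell points the host at it (the loghost address below is only an example):

    # send logs to a remote syslog server as well (example address and port)
    esxcli system syslog config set --loghost='udp://syslog.example.com:514'

    # reload the syslog daemon so the change takes effect
    esxcli system syslog reload

    # make sure outgoing syslog traffic is allowed through the host firewall
    esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true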

    Regards,
    Joerg



  • 3.  RE: Single host in cluster with error log on non-persistent storage

    Posted Apr 26, 2020 10:40 PM

    They are all already set to /scratch/log. So I will try a reboot of that host, thank you for the help.
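
    Before the reboot I will also double-check where /scratch actually ends up on the problem host, roughly like this (my assumption being that the warning means it currently points at the ramdisk):

    # /scratch is a symlink; see whether it points at a VMFS volume or at /tmp on the ramdisk
    ls -ld /scratch

    # show the filesystem backing the scratch location
    df -h /scratch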



  • 4.  RE: Single host in cluster with error log on non-persistent storage

    Posted Apr 26, 2020 10:54 PM

    That setting is the (sys)log directory, not the scratch location.

    Maybe all the other hosts suppress the warning and this one does not? Creating a persistent scratch location is the solution to your problem... and do it for all nine hosts.

    Regards

    Joerg



  • 5.  RE: Single host in cluster with error log on non-persistent storage

    Posted Apr 26, 2020 11:44 PM

    These are the current scratch/log settings:

    ScratchConfig.ConfiguredScratchLocation  
    ScratchConfig.CurrentScratchLocation   /tmp/scratch

    I assume you mean to change ScratchConfig.CurrentScratchLocation to something like /vmfs/volumes/5735f199-20b5a43a-0498-a0369fa17999/.locker

    For all of them?



  • 6.  RE: Single host in cluster with error log on non-persistent storage

    Posted Apr 27, 2020 03:42 AM

    Correct. However, the folder name should look like this if all hosts' logs are saved to the same datastore:

    /vmfs/volumes/5735f199-20b5a43a-0498-a0369fa17999/.locker-hostname



  • 7.  RE: Single host in cluster with error log on non-persistent storage

    Posted Apr 27, 2020 04:59 AM

    ScratchConfig.CurrentScratchLocation   /tmp/scratch

    That's the unmodified default value. I assume that on your other 8 hosts someone just enabled the option to suppress the warning about logs on non-persistent storage instead of solving the problem.

    Keep in mind that every host needs its own directory:

    cd /vmfs/volumes/5735f199-20b5a43a-0498-a0369fa17999
    mkdir .locker-hostname1 .locker-hostname2 .locker-hostname3 .locker-hostname4 ...
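
    Then point every host at its own folder and reboot it, roughly like this (datastore UUID taken from your earlier post, hostname is a placeholder):

    # run once per host: set the configured scratch location to that host's folder
    vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation string /vmfs/volumes/5735f199-20b5a43a-0498-a0369fa17999/.locker-hostname1

    # ScratchConfig.CurrentScratchLocation only changes to the new path after the reboot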

    Regards,
    Joerg



  • 8.  RE: Single host in cluster with error log on non-persistent storage

    Posted Apr 29, 2020 04:19 AM

    Thanks for answering all my remedial questions, guys. However, I have decided to wait a little while before I start making changes like this. One of the admins I work with has been here 15 years and has a very different mindset than I do. I won't go in-depth, but he has an 80% completion rule, and I'm betting you can guess what that means. After I get a better feel for things I will start doing things like this. I appreciate the help and I will remember to update with my results when I can.

    That said, I have one semi-unrelated question; if this is against the rules, my apologies and don't bother answering. Basically I'm wondering how safe it is to put a single host in a cluster of 9 into maintenance mode (which, as I understand it, will move all the machines to another host since vMotion is on) and then reboot that host. As long as I do it one at a time and give it plenty of time to do its thing, I should be OK, correct?
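
    For what it is worth, this is roughly the per-host sequence I have in mind once vCenter has moved the VMs off (just a sketch, I have not run it here yet; as I understand it, with DRS in fully automated mode the evacuation happens when maintenance mode is requested from vCenter):

    # confirm the host really is in maintenance mode before touching it
    esxcli system maintenanceMode get

    # reboot the host; this refuses to run unless the host is in maintenance mode
    esxcli system shutdown reboot --reason "apply persistent scratch location"

    # once the host is back in the cluster, take it out of maintenance mode again
    esxcli system maintenanceMode set --enable false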



  • 9.  RE: Single host in cluster with error log on non-persistent storage

    Posted Apr 26, 2020 07:13 AM

    You can create a scratch partition and reboot the host to get it cleared.

    VMware Knowledge Base