ESXi

  • 1.  (Not urgent, just a home lab!) RAID 5 rebuild, VMFS datastores lost.

    Posted May 02, 2023 10:54 PM

    So just to preface, I don't want to intrude. My data is not valuable; this is purely a learning experience.

     

    I work in IT and have SOME experience, but I am very much at the beginning of my VMware journey. As I start rolling out VM deployments, I like to break my labs and restore them so I can fix issues for my clients. It's the only way I know how to learn.

     

    I've run into an issue which I am unable to solve:

     

    I have a ProLiant DL380 Gen7 running RAID 5 on a P410i RAID controller. The disks are all 900 GB HP 2.5-inch SAS drives (not that it matters).

     

    I decided to pull these drives and replace them with SSDs so I could pass the new disks through the controller. Apparently this is only possible if I wipe the drives (HBA mode is not available on the P410i). When I went to put the SAS drives back, I found I had forgotten their original order.

     

    I did not think much of that, and when prompted, I booted the array configuration utility ISO and somewhat managed to get the original RAID 5 array to show up again.

     

    I was not prompted to rebuild the array; however, it did warn me that I might incur some data loss because the drives were not in their exact original order? I can't remember the exact error, to be honest.

     

    Booting into ESXi 7, I found the datastores were no longer visible, and I saw a new error about the scratch folder, something about it not being configured (I wish I could remember the exact wording).

     

    So, I thought it would be best to reinstall; I wiped the SD card and upgraded to ESXi 8.

     

    I have found the VMFS partition, and there is a reference to the "Main Volume" label when running:

     

    [root@localhost:/dev/disks] offset="128 2048"; \
    for dev in $(esxcfg-scsidevs -l | grep "Console Device:" | awk '{print $3}'); do \
      disk=$dev; echo $disk; \
      partedUtil getptbl $disk; \
      { for i in $offset; do \
          echo "Checking offset found at $i:"; \
          hexdump -n4 -s $((0x100000+(512*$i))) $disk; \
          hexdump -n4 -s $((0x1300000+(512*$i))) $disk; \
          hexdump -C -n 128 -s $((0x130001d+(512*$i))) $disk; \
        done; } | grep -B 1 -A 5 d00d; \
      echo "---------------------"; \
    done
    /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0
    gpt
    3740 255 63 60088320
    1 64 204863 C12A7328F81F11D2BA4B00A0C93EC93B systemPartition 128
    5 208896 2306047 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
    6 2308096 4405247 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
    7 4407296 60088286 4EB2EA3978554790A79EFAE495E21F8D vmfsl 0
    ---------------------
    /vmfs/devices/disks/naa.600508b1001c1bf8a053590733375ffb
    gpt
    656623 255 63 10548655152
    1 2048 10548652032 AA31E02A400F11DB9590000C2911D1B8 vmfs 0
    Checking offset found at 2048:
    0200000 d00d c001
    0200004
    1400000 f15e 2fab
    1400004
    0140001d 4d 61 69 6e 20 56 6f 6c 75 6d 65 00 00 00 00 00 |Main Volume.....|
    0140002d 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
    ---------------------
    /vmfs/devices/disks/naa.600508b1001c2863757871ad9c529fbf
    unknown
    486397 255 63 7813971632
    ---------------------

    My understanding is that the partition looks healthy? Though I am still unable to find the datastores.
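
    One thing I still want to rule out (guessing here, since this is now a fresh ESXi 8 install) is whether the volume is simply being treated as an unresolved/snapshot copy rather than being mounted. Something like:

    # list VMFS volumes the host considers snapshot/unresolved copies
    esxcli storage vmfs snapshot list
    # older-style equivalent
    esxcfg-volume -l
    # if the volume shows up there, force-mount it keeping its existing signature
    esxcfg-volume -M <VMFS UUID or label>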

     

    Looking at the raw data, it seems to be intact; at the very least I can pick out bits and pieces of information like this:

     

    Sorry for the spam, but it gives you an idea of what I mean:

     

    [ ...binary noise omitted... ]
    | - .sdd.sf
    | - Server 2022
    | - ISO
    | - .locker
    | | - cache
    | | | - loadESX
    | | - tmp
    | | - core
    | | - var
    | | | - tmp
    | | - vmware
    | | | - lifecycle
    | | - downloads
    | | - log
    | | - store
    | | - locker
    | | | - packages
    | | | | - var
    | | | | | - db
    | | | | | | - locker
    | | | | | | | - addons
    | | | | | | | - vibs
    | | | | | | | - reservedVibs
    | | | | | | | - bulletins
    | | | | | | | - solutions
    | | | | | | | - baseimages
    | | | | | | | - manifests
    | | | | | | | - reservedComponents
    | | | | | | | - profiles
    | | - vdtc
    | | - healthd
    | - vmkdump
    | - *redacted* 2.0
    | - VPN Server
    | - ezpz
    | - Docker Experiments
    | - macback
    | - pfSense
    | - testing
    | - Backup's
    | - webserver
    | - RC test
    | - mac
    | - Proxmox Backup
    | - mac2
    | - macOS
    | - DNS
    | - PROXMOX BACKUP SERVER
    | - Windows 10 Enterprise
    | - VPN client
    | - omv
    [ ...more binary noise omitted... ]

    I hope I'm missing something really dumb, like a command to force a rescan for datastores.
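
    In case it is that dumb, these are the rescan commands I'm aware of (I may well be missing one):

    # rescan all storage adapters, then refresh VMFS volumes
    esxcli storage core adapter rescan --all
    vmkfstools -V
    # then see what the host thinks is mounted
    esxcli storage filesystem list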

     

    Things that I've tried:

     

    • Mounting via vmfs6-fuse from a live Debian 11 environment - it complains about a bad magic number, which is far above my pay grade (rough command in the sketch after this list).

     

    • Ran TestDisk; it got to 1% after an hour, and I decided my time was better spent posting this.
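
    For the vmfs6-fuse attempt, this is roughly what I ran from the Debian live environment (from memory, and assuming /dev/sdb1 is the VMFS partition on the RAID volume - the bad magic error may just mean I pointed it at the wrong device or at the whole disk):

    # vmfs6-tools from the Debian repos; mount the VMFS partition read-only via FUSE
    sudo apt install vmfs6-tools
    sudo mkdir -p /mnt/vmfs
    sudo vmfs6-fuse /dev/sdb1 /mnt/vmfs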

     

    More than happy to try anything, and keen to document my findings even if I fail - for myself and for others who are learning.

     

    Please ignore this if people actually need help; this is not urgent at all.

     

    Cheers, goodnight -

     

    Nathan

     

     

     



  • 2.  RE: (Not urgent, just a home lab!) RAID 5 rebuild, VMFS datastores lost.

    Posted May 03, 2023 01:34 PM

    I am seeing a segmentation fault when running voma.

     

    Any ideas?

     

    [root@esxi:~] voma -m vmfs avfix -d /dev/disks/naa.600508b1001c1bf8a053590733375ffb
    Running VMFS Checker version 2.1 in default mode
    Initializing LVM metadata, Basic Checks will be done
    Detected valid GPT signatures
    Number Start End Type
    1 2048 10548652032 vmfs
    Initializing LVM metadata..\Segmentation fault
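
    For what it's worth, I'll also try a read-only check pointed at the partition itself rather than the whole device (I've read voma wants the ":1" partition; not sure whether that is what triggers the segfault):

    # check-only pass against the VMFS partition (note the :1 at the end)
    voma -m vmfs -f check -d /vmfs/devices/disks/naa.600508b1001c1bf8a053590733375ffb:1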

     



  • 3.  RE: (Not urgent, just a home lab!) RAID 5 rebuild, VMFS datastores lost.

    Posted May 08, 2023 08:09 PM

    5 days later, any ideas?

     

    If I don't hear back by tomorrow night, I'll have to go ahead and wipe.

     

    Thanks