VMware vSphere

  • 1.  Error while logging in to Kubernetes cluster using one of the VIPs assigned for working with the cluster

    Posted Mar 23, 2022 06:02 PM

    I get the following error when trying to log in with kubectl-vsphere to work with the cluster:

    time="2022-03-23T11:44:18-05:00" level=fatal msg="Failed to get available workloads: invalid character '<' looking for beginning of value"

    I'm using kubectl-vsphere login --vsphere-username Administrator@vsphere.local --server=https://192.168.1.210 --insecure-skip-tls-verify to log in. I am at a loss as to where to start troubleshooting or which log file to look in for more info. I would appreciate any suggestions. Thanks.
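A note on the error itself: "invalid character '<' looking for beginning of value" is Go's JSON decoder choking on a response body that starts with '<' -- in other words, the plugin received an HTML page (an error page or redirect) where it expected JSON. A minimal local reproduction of the same failure mode, using a made-up HTML body and Python's JSON parser as a stand-in:

```shell
# Feed a hypothetical HTML error page to a JSON parser: it fails on the
# very first byte, '<', just like the kubectl-vsphere error above.
printf '<html><body>503 Service Unavailable</body></html>' \
  | python3 -c 'import json,sys; json.load(sys.stdin)' 2>&1 | tail -n 1
```

On the real system, curl -vk https://192.168.1.210/ (the VIP from this post) shows what the server is actually sending back; an HTML body there would corroborate this failure mode.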

     



  • 2.  RE: Error while logging in to Kubernetes cluster using one of the VIPs assigned for working with the cluster

    Posted Apr 01, 2022 04:10 PM

    Tracking. I'm getting the same error.



  • 3.  RE: Error while logging in to Kubernetes cluster using one of the VIPs assigned for working with the cluster

    Posted Apr 13, 2022 03:53 PM

    Tracking. I am getting the same error too.



  • 4.  RE: Error while logging in to Kubernetes cluster using one of the VIPs assigned for working with the cluster

    Posted Apr 14, 2022 05:38 PM

    I wanted to share that we identified the issue on our side and resolved it. I suspect our root cause will not be the same as yours, but it may be helpful.

    The error is generic: it basically means authentication failed to return what the client expected. In our case, we had retired two domain controllers but never removed their records from our domain DNS. When VMware went to AD to authenticate, it would work against a live IP but throw the error we are reporting against a dead IP.

    The solution for us was to clean up our DNS, after which the issue resolved.

    In testing we verified the behavior was the same on each Supervisor node: local authentication always worked, while domain authentication sometimes worked and sometimes failed.

    While I was on the call, the technician said he had also helped another user reporting the same issue. In that user's case, he was a member of a group with multiple @ signs in its name, like a distribution list. That also caused authentication to fail and produce the same error.

    Hope this helps you identify the root cause.
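If you suspect the same stale-domain-controller situation, one way to sketch the check is to take the DC records DNS still advertises and probe each one on the LDAP port. The hostnames below are hypothetical placeholders; in a real AD domain you would get the list from the SRV records (e.g. dig +short -t SRV _ldap._tcp.dc._msdcs.yourdomain):

```shell
# Probe each DNS-advertised domain controller on LDAP port 389.
# dc1/dc2.example.local are hypothetical; substitute your real DC records.
for dc in dc1.example.local dc2.example.local; do
  if timeout 2 bash -c "exec 3<>/dev/tcp/$dc/389" 2>/dev/null; then
    echo "$dc: reachable"
  else
    echo "$dc: STALE - consider removing its DNS records"
  fi
done
```

Any entry flagged STALE is a candidate for the DNS cleanup described above.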



  • 5.  RE: Error while logging in to Kubernetes cluster using one of the VIPs assigned for working with the cluster

    Posted Apr 14, 2022 05:44 PM

    My issue is that I have an underlying NSX-T problem that is breaking DNS.



  • 6.  RE: Error while logging in to Kubernetes cluster using one of the VIPs assigned for working with the cluster

    Posted Apr 14, 2022 05:51 PM

    Just going to put this out there in case you have not checked. When I first deployed Tanzu from vSphere it came up successfully; however, I had multiple strange issues. These were resolved by making sure jumbo frames were configured end to end. It could be completely unrelated, but NSX-T requires an MTU of at least 1600 bytes, slightly larger than a standard packet.

     

    Anyway, I was surprised I could deploy successfully without jumbo frames and that it would mostly work.
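To verify end-to-end MTU, the usual trick is a don't-fragment ping whose ICMP payload equals the target MTU minus the 28 bytes of IP and ICMP headers. The snippet below only computes and prints the command to run (the target host is a placeholder); on ESXi the rough equivalent is vmkping -d -s with the same payload size:

```shell
# NSX-T overlay traffic needs MTU >= 1600 on every physical hop.
# Don't-fragment ping payload = MTU - 20 (IP header) - 8 (ICMP header).
MTU=1600
PAYLOAD=$((MTU - 28))
# <transport-node-ip> is a placeholder for a host on the overlay network.
echo "ping -M do -s $PAYLOAD -c 3 <transport-node-ip>"
```

If that ping fails while a default-size ping succeeds, some hop in the path is dropping jumbo frames.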



  • 7.  RE: Error while logging in to Kubernetes cluster using one of the VIPs assigned for working with the cluster

    Posted May 19, 2022 07:22 AM

    Hi,

    In my case the solution was to redeploy the Supervisor Cluster WITHOUT the optional Workload DNS entry, which I had set to the general DNS server in our environment.


    I'm still not sure how this DNS setting is supposed to work anyway, since there are no gateway settings or similar, so I removed the DNS entry and login works fine now. It seems the cluster tries a reverse lookup of the login and, as it cannot reach the DNS server, the login fails.



  • 8.  RE: Error while logging in to Kubernetes cluster using one of the VIPs assigned for working with the cluster

    Posted May 19, 2022 11:38 AM

    Thanks for the tip!



  • 9.  RE: Error while logging in to Kubernetes cluster using one of the VIPs assigned for working with the cluster

    Posted Nov 01, 2023 10:20 PM

    Hi,

    Always check whether there are any pods not running inside the control plane VM:

     

    kubectl get po -n kube-system

    In my case the problem was the CoreDNS pods: DNS routing between the workload network and the DNS server was broken. After fixing it, the pods came back up.
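To spot unhealthy pods quickly, filter the listing for anything whose STATUS column is not Running. The captured listing below is hypothetical sample output; on the Supervisor control plane VM you would pipe the real kubectl get po -n kube-system --no-headers output into the same filter:

```shell
# Print NAME and STATUS of every kube-system pod that is not Running.
# The here-doc stands in for real `kubectl get po -n kube-system --no-headers` output.
awk '$3 != "Running" {print $1, $3}' <<'EOF'
coredns-6b7f9c5d44-abcde   1/1   Running            0   10d
coredns-6b7f9c5d44-fghij   0/1   CrashLoopBackOff   12  10d
kube-apiserver-node-1      1/1   Running            0   10d
EOF
```

A CoreDNS pod stuck in CrashLoopBackOff, as in this sample, would point straight at the DNS routing problem described above.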