VMware Tanzu Kubernetes Grid Integrated Edition


On-prem installation seems to be having inconsistent hostname resolution issues. Is there a way that I can override the hostnames?

Sarma K posted Oct 26, 2018 08:56 AM

I am trying to set up Greenplum on an on-prem k8s cluster. I am hitting an SSH connection issue with this host:

ssh: Could not resolve hostname segment-a-0.agent.default.svc.cluster.local: Temporary failure in name resolution

But the name does resolve when I try it manually with ssh and also with nslookup.
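
For example, checks along these lines succeed when run by hand from inside the master-0 pod (a sketch of what I mean, using the gpadmin user from the cluster scripts):

nslookup segment-a-0.agent.default.svc.cluster.local

ssh gpadmin@segment-a-0.agent.default.svc.cluster.local hostname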

 

I had to add the following entries to the master-0 node's /etc/hosts:

 

10.42.18.14   master-0.agent.default.svc.cluster.local    master-0

10.42.19.12   master-1.agent.default.svc.cluster.local    master-1

10.42.15.12   segment-a-0.agent.default.svc.cluster.local   segment-a-0

10.42.20.12   segment-b-0.agent.default.svc.cluster.local   segment-b-0

 

Without these entries, the script (/home/gpadmin/tools/wrap_initialize_cluster.bash) tries to reach the nodes sometimes by short hostname and sometimes by FQDN.

The errors look something like this:

"stderr='pg_basebackup: could not connect to server: could not translate host name "master-0" to address: Name or service not known"

This suggests there is some inconsistency in how the hosts are being named.
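
One thing that may be worth checking is the pod's DNS search path, since a bare name like segment-a-0 can only resolve through a matching search domain. For example:

kubectl exec master-0 -- cat /etc/resolv.conf

A pod in the default namespace typically gets a search line like "search default.svc.cluster.local svc.cluster.local cluster.local". The agent subdomain is not on that list, which may explain why short names for the other pods fail even though each pod can resolve its own name (kubelet writes a pod's own hostname into its /etc/hosts).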

 

May I know if this is a known issue or if there is a workaround?
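
In case there is no fix, the closest pod-spec equivalent of my /etc/hosts hack seems to be the hostAliases field, which has kubelet write the same entries into each container's /etc/hosts. A minimal sketch, assuming the Greenplum pod template can be edited at all (the operator may own it), and noting that hard-coded pod IPs are fragile if pods get rescheduled:

spec:
  hostAliases:
  - ip: "10.42.18.14"
    hostnames:
    - "master-0.agent.default.svc.cluster.local"
    - "master-0"
  - ip: "10.42.15.12"
    hostnames:
    - "segment-a-0.agent.default.svc.cluster.local"
    - "segment-a-0"

(and similarly for the other hosts)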

 

Thanks

 

 

Mark Nagle (Broadcom Employee)

Hi Sarma,

 

Kubernetes internal name resolution is done by the kube-dns pod, which in PKS can be found in the kube-system namespace. Run the following to see the current state of the kube-dns pods:

kubectl get pods -n kube-system
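
If those pods look healthy, it is also worth testing resolution from a throwaway pod, for example (dns-test is just an arbitrary name; busybox:1.28 is commonly used here because nslookup in some later busybox images misbehaves):

kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup segment-a-0.agent.default.svc.cluster.local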

Regards,

Mark Nagle

Sarma K

Hi Mark,

 

kube-dns is all good. I was able to resolve the hostnames of the pods from within all the Greenplum pods...
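
For example, a loop like this (a sketch using the pod names above) resolves the FQDN from every pod:

for p in master-0 master-1 segment-a-0 segment-b-0; do kubectl exec "$p" -- nslookup segment-a-0.agent.default.svc.cluster.local; done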