Steps to install a Kubernetes cluster manually on CentOS 7

In this blog, we will show you the steps to install a Kubernetes cluster manually on CentOS 7.

 

REQUIREMENTS

  • 3 Virtual Machines with an internet connection.
  • Kubernetes components

 

OVERVIEW

 

INFRASTRUCTURE OVERVIEW


  • We are creating a two-node cluster for this demo.
  • The master IP will be 192.168.3.81.
  • Node 1's IP will be 192.168.3.82 and Node 2's IP will be 192.168.3.83.
  • We are using the Flannel network for pod communication in this demo.

 

Note: The VM IPs may change based on your environment.

 

MASTER SERVER CONFIGURATION

  • Log in to the master server. We have already set the hostname to k8s-master during OS installation.


  • Disable SELinux using the commands below.

exec bash
setenforce 0
sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
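To see exactly what the persistent change does, here is the same sed substitution exercised against a throwaway copy of the config file (a sketch using a sample path in /tmp so it runs unprivileged; on the real host the target is /etc/sysconfig/selinux):

```shell
# Create a sample SELinux config so the rewrite can be tried safely.
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > /tmp/selinux.sample

# Same substitution as above, pointed at the sample file.
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /tmp/selinux.sample

grep '^SELINUX=' /tmp/selinux.sample   # prints: SELINUX=disabled
```

The sed change only takes effect at the next boot, which is why setenforce 0 is also run to switch the running system to permissive mode immediately.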


  • Open the sysctl.conf file.

vi /etc/sysctl.conf


  • Add the entries below to the conf file so that bridged traffic passes through iptables, then save the changes.

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
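As an alternative to editing sysctl.conf directly, the same two settings can be kept in their own drop-in file. A minimal sketch, written to /tmp here so it runs unprivileged; on the real host the conventional path would be /etc/sysctl.d/k8s.conf, applied with sysctl --system as root:

```shell
# Write the bridge settings to a drop-in style file (sample path for testing;
# use /etc/sysctl.d/k8s.conf on the real host).
cat > /tmp/k8s-sysctl.conf <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

# On the real host, apply with: sysctl --system   (requires root)
grep -c 'bridge-nf-call' /tmp/k8s-sysctl.conf   # prints: 2
</```

Keeping Kubernetes-specific settings in a separate file makes them easy to find and remove later without touching the system-wide sysctl.conf.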


  • Open the fstab file.

vi /etc/fstab


  • Disable swap by adding a # symbol at the beginning of the swap line, then save the changes.
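The same edit can be scripted with sed. A sketch run against a sample fstab so it is safe to try anywhere; on the real host, point it at /etc/fstab:

```shell
# Build a sample fstab with one swap entry.
cat > /tmp/fstab.sample <<'EOF'
/dev/mapper/centos-root /     xfs  defaults 0 0
/dev/mapper/centos-swap swap  swap defaults 0 0
EOF

# Comment out any uncommented line that mentions a swap filesystem.
sed -i 's/^\([^#].*[[:space:]]swap[[:space:]].*\)/#\1/' /tmp/fstab.sample

grep '^#' /tmp/fstab.sample   # prints the now-disabled swap line
```

On a live host you would also run swapoff -a to turn swap off immediately, rather than waiting for the reboot.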


  • Restart the VM using the command init 6 to apply the SELinux and swap changes.


  • Once the VM is back online, open the hosts file.

vi /etc/hosts


  • Add the entries below to the hosts file and save the changes.

192.168.3.81 k8s-master
192.168.3.82 k8s-node1
192.168.3.83 k8s-node2
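Since the same three entries must go onto every machine, the append can be scripted so it is safe to re-run. A sketch against a sample file (use /etc/hosts on the real hosts):

```shell
# Start from a sample hosts file so this is safe to run anywhere.
HOSTS=/tmp/hosts.sample
printf '127.0.0.1 localhost\n' > "$HOSTS"

# Append each entry only if it is not already present (idempotent).
for entry in '192.168.3.81 k8s-master' '192.168.3.82 k8s-node1' '192.168.3.83 k8s-node2'; do
  grep -qxF "$entry" "$HOSTS" || echo "$entry" >> "$HOSTS"
done

wc -l < "$HOSTS"   # 4 lines: localhost plus the three cluster hosts
```

Running the loop a second time adds nothing, which makes it safe to include in a node preparation script.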


Note: The IPs may change based on your environment.

 

  • Create a new file named kubernetes.repo under the yum.repos.d folder using the command below.

vi /etc/yum.repos.d/kubernetes.repo


  • Add the entries below and save the changes.

[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg


INSTALLING DOCKER AND KUBEADM

  • Now, install Docker and kubeadm using the command below.

yum install -y kubeadm docker


  • It will take a few minutes to complete the installation.


  • Enable and start the docker and kubelet services using the commands below.

systemctl enable docker && systemctl enable kubelet
systemctl start docker && systemctl start kubelet

 


NODE(S) CONFIGURATION

  • We have already set the hostnames for node1 and node2 during OS installation.


  • Follow the master server configuration steps above to prepare each Kubernetes node. We have already installed Docker and kubeadm on both nodes.


CREATING CLUSTER

  • We are using the Flannel network for this demo.
  • For Flannel to work properly, we need to specify the pod network CIDR while initializing the cluster. Use the command below to create the cluster.

kubeadm init --pod-network-cidr=10.244.0.0/16


  • It will take a few minutes to complete the configuration process.


  • The Kubernetes cluster has been configured successfully.


  • Execute the commands below to start using the cluster.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config


  • Also, make a note of the kubeadm join command printed in the output; it is needed to add the nodes to the cluster.


  • We can list the system pods using the command below.

kubectl get pods --all-namespaces


  • kube-dns will remain in the Pending state until we install the pod network for the cluster.

 

INSTALLING NETWORK

  • Use the command below to install the Flannel network.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml


  • It will take a few minutes to complete the installation.


  • The flannel pod will start initializing, and it will take a few minutes to complete the configuration process.


  • We can get detailed information about a pod using the describe option.

kubectl describe pod <pod name> --namespace=kube-system


  • After some time, all the system pods will be in the Running state.


ADDING NODES TO THE CLUSTER

  • From node1, use the kubeadm join command to join the node to the cluster.

kubeadm join --token 878914.4587d610b5478141 192.168.3.81:6443 --discovery-token-ca-cert-hash sha256:6a23a6a7be997625f53e564a8b510627d035b000f9c288f7487fc9415c3338f1

 

Note: The token ID, IP, and sha256 hash will vary based on your environment.


  • The node has joined the cluster successfully.


VERIFICATION

  • From the master server, run the command below to list the nodes in the cluster.

kubectl get nodes


  • We can use the same kubeadm join command to add more nodes in the future.

 

TIP

  • If you forget to make a note of the kubeadm join information, use the command below on the master server to retrieve it.

kubeadm token create --print-join-command


EXTERNAL LINKS

https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/

Thanks for reading this blog. We hope it was useful for learning how to install Kubernetes manually on CentOS 7.



Author: Loges
Logeswaran holds Microsoft certified engineer and solution architect certifications, with over 11 years of experience in hosting technologies and IMS/Cloud consulting. At AssistanZ, Logeswaran spearheads the strategic planning and execution of the company's Microsoft-based core technologies for enterprise clients.

6 Comments

  • dedy efendi

    Thanks for tutorial

    when i’m execute kubeadm init --pod-network-cidr=10.244.0.0/16
    i got error message
    Apr 10 20:08:46 k8s-master journal: E0410 13:08:46.187815 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1beta1.StatefulSet: statefulsets.apps is forbidden: User “system:kube-scheduler” cannot list statefulsets.apps at the cluster scope
    Apr 10 20:08:46 k8s-master journal: E0410 13:08:46.188153 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User “system:kube-scheduler” cannot list poddisruptionbudgets.policy at the cluster scope
    Apr 10 20:08:46 k8s-master journal: E0410 13:08:46.189849 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User “system:kube-scheduler” cannot list persistentvolumes at the cluster scope
    Apr 10 20:08:46 k8s-master journal: E0410 13:08:46.191708 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1beta1.ReplicaSet: replicasets.extensions is forbidden: User “system:kube-scheduler” cannot list replicasets.extensions at the cluster scope
    Apr 10 20:08:46 k8s-master journal: E0410 13:08:46.195605 1 reflector.go:205] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:594: Failed to list *v1.Pod: pods is forbidden: User “system:kube-scheduler” cannot list pods at the cluster scope
    Apr 10 20:08:46 k8s-master journal: E0410 13:08:46.198239 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.Service: services is forbidden: User “system:kube-scheduler” cannot list services at the cluster scope
    Apr 10 20:08:46 k8s-master journal: E0410 13:08:46.199264 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.Node: nodes is forbidden: User “system:kube-scheduler” cannot list nodes at the cluster scope
    Apr 10 20:08:46 k8s-master journal: E0410 13:08:46.200382 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User “system:kube-scheduler” cannot list persistentvolumeclaims at the cluster scope
    Apr 10 20:08:46 k8s-master journal: E0410 13:08:46.206266 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User “system:kube-scheduler” cannot list replicationcontrollers at the cluster scope
    Apr 10 20:08:46 k8s-master journal: E0410 13:08:46.206417 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User “system:kube-scheduler” cannot list storageclasses.storage.k8s.io at the cluster scope
    Apr 10 20:08:47 k8s-master journal: E0410 13:08:47.190241 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1beta1.StatefulSet: statefulsets.apps is forbidden: User “system:kube-scheduler” cannot list statefulsets.apps at the cluster scope
    Apr 10 20:08:47 k8s-master journal: E0410 13:08:47.191156 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User “system:kube-scheduler” cannot list poddisruptionbudgets.policy at the cluster scope
    Apr 10 20:08:47 k8s-master journal: E0410 13:08:47.194226 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User “system:kube-scheduler” cannot list persistentvolumes at the cluster scope
    Apr 10 20:08:47 k8s-master journal: E0410 13:08:47.195848 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1beta1.ReplicaSet: replicasets.extensions is forbidden: User “system:kube-scheduler” cannot list replicasets.extensions at the cluster scope
    Apr 10 20:08:47 k8s-master journal: E0410 13:08:47.197795 1 reflector.go:205] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:594: Failed to list *v1.Pod: pods is forbidden: User “system:kube-scheduler” cannot list pods at the cluster scope
    Apr 10 20:08:47 k8s-master journal: E0410 13:08:47.201109 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.Service: services is forbidden: User “system:kube-scheduler” cannot list services at the cluster scope
    Apr 10 20:08:47 k8s-master journal: E0410 13:08:47.202618 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.Node: nodes is forbidden: User “system:kube-scheduler” cannot list nodes at the cluster scope
    Apr 10 20:08:47 k8s-master journal: E0410 13:08:47.202839 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User “system:kube-scheduler” cannot list persistentvolumeclaims at the cluster scope
    Apr 10 20:08:47 k8s-master journal: E0410 13:08:47.208671 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User “system:kube-scheduler” cannot list replicationcontrollers at the cluster scope
    Apr 10 20:08:47 k8s-master journal: E0410 13:08:47.211041 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User “system:kube-scheduler” cannot list storageclasses.storage.k8s.io at the cluster scope
    Apr 10 20:08:47 k8s-master journal: I0410 13:08:47.792028 1 trace.go:76] Trace[248092992]: “Create /apis/certificates.k8s.io/v1beta1/certificatesigningrequests” (started: 2018-04-10 13:08:37.791253589 +0000 UTC m=+7.170187464) (total time: 10.00073808s):
    Apr 10 20:08:47 k8s-master kubelet: E0410 20:08:47.792745 1389 certificate_manager.go:299] Failed while requesting a signed certificate from the master: cannot create certificate signing request: certificatesigningrequests.certificates.k8s.io “csr-” is forbidden: not yet ready to handle request

  • Hi Dedy,

    Thanks for your comments.

    Which version of kubernetes has been installed in your environment?

    We have tested in kubernetes 1.9.3 and its working fine.

    Also, please check this link – https://github.com/kubernetes/kubeadm/issues/217

    Regards,
    Loges

  • if we have multiple node with different application , in this case how we will done the deployment , and how the master node will understand which node need to deploy

  • Hi Shufil,

    Thanks for your comment. We can deploy the application using the labels. Please check our other blogs to know about the labels functionality in kubernetes environment.

    Regards,
    Loges

  • Sai

    Hi,
    I am getting this error. Please help me out!
    Events:
    Type Reason Age From Message
    —- —— —- —- ——-
    Warning FailedScheduling 4m (x12 over 6m) default-scheduler 0/1 nodes are available: 1 node(s) were not ready.
    Normal Scheduled 4m default-scheduler Successfully assigned kube-dns-86f4d74b45-thz7h to master
    Normal SuccessfulMountVolume 4m kubelet, master MountVolume.SetUp succeeded for volume “kube-dns-config”
    Normal SuccessfulMountVolume 4m kubelet, master MountVolume.SetUp succeeded for volume “kube-dns-token-xxl6s”
    Normal Pulling 4m kubelet, master pulling image “k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8”
    Normal Pulled 4m kubelet, master Successfully pulled image “k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8”
    Normal Pulling 4m kubelet, master pulling image “k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8”
    Normal Created 4m kubelet, master Created container
    Normal Pulling 4m kubelet, master pulling image “k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8”
    Normal Started 4m kubelet, master Started container
    Normal Pulled 4m kubelet, master Successfully pulled image “k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8”
    Normal Pulled 4m kubelet, master Successfully pulled image “k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8”
    Normal Created 4m kubelet, master Created container
    Normal Started 4m kubelet, master Started container
    Normal Started 3m (x2 over 4m) kubelet, master Started container
    Normal Created 3m (x2 over 4m) kubelet, master Created container
    Normal Pulled 3m kubelet, master Container image “k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8” already present on machine
    Warning Unhealthy 2m (x6 over 3m) kubelet, master Readiness probe failed: Get http://10.244.0.2:8081/readiness: dial tcp 10.244.0.2:8081: getsockopt: connection refused
    Warning Unhealthy 2m (x2 over 3m) kubelet, master Liveness probe failed: HTTP probe failed with statuscode: 503

  • sujai

    hi Loges,

    How to assing a external IP to an existing network using ingress.

    Thanks,
    sujai.K
