
Steps to install a Kubernetes cluster manually on CentOS 7

In this blog, we will show you the steps to install a Kubernetes cluster manually on CentOS 7.

 

REQUIREMENTS

  • 3 virtual machines with an internet connection.
  • Kubernetes components

 

INFRASTRUCTURE OVERVIEW

  • We are creating a cluster with one master and two worker nodes for this demo.
  • The master IP will be 192.168.3.81.
  • Node 1's IP will be 192.168.3.82 and Node 2's IP will be 192.168.3.83.
  • We are using the Flannel network for pod communication in this demo.

 

Note: The VM IPs may change based on your environment.

 

MASTER SERVER CONFIGURATION

  • Log in to the master server. We have already set the hostname to k8s-master during OS installation.


  • Disable SELinux using the commands below.

setenforce 0
sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
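
  • Optionally, verify that SELinux is now in permissive mode (it will be fully disabled after the reboot later in this guide).

getenforce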


  • Open the sysctl.conf file.

vi /etc/sysctl.conf


  • Add the entries below to the conf file so that traffic crossing the Linux bridge is passed to iptables, then save the changes.

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
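
  • Note: these settings are normally read at boot. A common way to apply them immediately (assuming the br_netfilter kernel module, which provides these keys, is available on your system) is to load the module and re-read the file:

modprobe br_netfilter
sysctl -p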


  • Open the fstab file.

vi /etc/fstab


  • Disable swap by adding a # symbol at the beginning of the swap line and save the changes.

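  • As an illustration, a commented-out swap entry in /etc/fstab typically looks like the line below; the device name will vary based on your environment.

#/dev/mapper/centos-swap swap swap defaults 0 0

  • You can also run swapoff -a to turn swap off immediately, though the reboot in the next step achieves the same result.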

  • Restart the VM using the command init 6 to apply the SELinux and swap changes.


  • Once the VM is back online, open the hosts file.

vi /etc/hosts


  • Add the entries below to the hosts file and save the changes.

192.168.3.81 k8s-master
192.168.3.82 k8s-node1
192.168.3.83 k8s-node2
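
  • Optionally, verify that the names resolve from the master, for example:

ping -c 2 k8s-node1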


Note: The IPs may change based on your environment.

 

  • Create a new file named kubernetes.repo under the /etc/yum.repos.d directory using the command below.

vi /etc/yum.repos.d/kubernetes.repo


  • Add the entries below and save the changes.

[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg


INSTALLING DOCKER AND KUBEADM

  • Now install Docker and kubeadm using the command below.

yum install -y kubeadm docker


  • It will take a few minutes to complete the installation.


  • Enable and start the docker and kubelet services using the commands below.

systemctl enable docker && systemctl enable kubelet
systemctl start docker && systemctl start kubelet
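
  • Note: the kubelet service may stay in an activating (auto-restart) loop until the cluster is initialized with kubeadm init (or joined with kubeadm join); that is expected at this stage. You can check both services with:

systemctl is-active docker kubelet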


NODE(S) CONFIGURATION

  • We have already set the hostnames for node1 and node2 during OS installation.


  • Follow the master server configuration steps above to prepare the Kubernetes nodes. We have already installed Docker and kubeadm on both nodes.


CREATING CLUSTER

  • We are using the Flannel network for this demo.
  • For Flannel to work properly, we need to specify the pod network CIDR while initializing the cluster. Use the command below to create the cluster.

kubeadm init --pod-network-cidr=10.244.0.0/16


  • It will take a few minutes to complete the configuration process.


  • The Kubernetes cluster has been configured successfully.


  • Execute the commands below to start using the cluster.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
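
  • To confirm that kubectl can now talk to the cluster, you can run:

kubectl cluster-info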


  • Also, make a note of the kubeadm join command printed at the end of the kubeadm init output; it is needed to add nodes to the cluster.


  • We can list the available system pods using the command below.

kubectl get pods --all-namespaces


  • kube-dns will remain in the Pending state until we install the pod network for the cluster.

 

INSTALLING NETWORK

  • Use the command below to install the Flannel network.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
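
  • Optionally, append the -w (watch) flag to the pod listing to follow the rollout in real time:

kubectl get pods --all-namespaces -w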


  • It will take a few minutes to complete the installation.


  • The Flannel pods will start initializing, and it will take a few minutes for the configuration process to complete.


  • We can get detailed information about a pod using the describe option.

kubectl describe pod <pod name> --namespace=kube-system


  • After some time, all the system pods will be in the Running state.


ADDING NODES TO THE CLUSTER

  • From node1, use the kubeadm join command to add the node to the cluster.

kubeadm join --token 878914.4587d610b5478141 192.168.3.81:6443 --discovery-token-ca-cert-hash sha256:6a23a6a7be997625f53e564a8b510627d035b000f9c288f7487fc9415c3338f1

 

Note: The token ID, IP, and sha256 hash will vary based on your environment.


  • The node has joined the cluster successfully.


VERIFICATION

  • From the master server, run the command below to list the available nodes in the cluster.

kubectl get nodes
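
  • For additional details such as node IPs and OS versions, you can add the -o wide output option:

kubectl get nodes -o wide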


  • We can use the same kubeadm join command to add more nodes in the future.

 

TIP

  • If you forget to make a note of the kubeadm join information, use the command below from the master server to retrieve it.

kubeadm token create --print-join-command
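
  • You can also list the existing bootstrap tokens with:

kubeadm token list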


EXTERNAL LINKS

https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/

Thanks for reading this blog. We hope it was useful for you to learn how to install Kubernetes manually on CentOS 7.

15 thoughts on “Steps to install a Kubernetes cluster manually on CentOS 7”

  1. Thanks for the tutorial

    When I execute kubeadm init --pod-network-cidr=10.244.0.0/16
    I get this error message:
    Apr 10 20:08:46 k8s-master journal: E0410 13:08:46.187815 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1beta1.StatefulSet: statefulsets.apps is forbidden: User “system:kube-scheduler” cannot list statefulsets.apps at the cluster scope
    Apr 10 20:08:46 k8s-master journal: E0410 13:08:46.188153 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User “system:kube-scheduler” cannot list poddisruptionbudgets.policy at the cluster scope
    Apr 10 20:08:46 k8s-master journal: E0410 13:08:46.189849 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User “system:kube-scheduler” cannot list persistentvolumes at the cluster scope
    Apr 10 20:08:46 k8s-master journal: E0410 13:08:46.191708 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1beta1.ReplicaSet: replicasets.extensions is forbidden: User “system:kube-scheduler” cannot list replicasets.extensions at the cluster scope
    Apr 10 20:08:46 k8s-master journal: E0410 13:08:46.195605 1 reflector.go:205] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:594: Failed to list *v1.Pod: pods is forbidden: User “system:kube-scheduler” cannot list pods at the cluster scope
    Apr 10 20:08:46 k8s-master journal: E0410 13:08:46.198239 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.Service: services is forbidden: User “system:kube-scheduler” cannot list services at the cluster scope
    Apr 10 20:08:46 k8s-master journal: E0410 13:08:46.199264 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.Node: nodes is forbidden: User “system:kube-scheduler” cannot list nodes at the cluster scope
    Apr 10 20:08:46 k8s-master journal: E0410 13:08:46.200382 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User “system:kube-scheduler” cannot list persistentvolumeclaims at the cluster scope
    Apr 10 20:08:46 k8s-master journal: E0410 13:08:46.206266 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User “system:kube-scheduler” cannot list replicationcontrollers at the cluster scope
    Apr 10 20:08:46 k8s-master journal: E0410 13:08:46.206417 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User “system:kube-scheduler” cannot list storageclasses.storage.k8s.io at the cluster scope
    Apr 10 20:08:47 k8s-master journal: E0410 13:08:47.190241 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1beta1.StatefulSet: statefulsets.apps is forbidden: User “system:kube-scheduler” cannot list statefulsets.apps at the cluster scope
    Apr 10 20:08:47 k8s-master journal: E0410 13:08:47.191156 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User “system:kube-scheduler” cannot list poddisruptionbudgets.policy at the cluster scope
    Apr 10 20:08:47 k8s-master journal: E0410 13:08:47.194226 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User “system:kube-scheduler” cannot list persistentvolumes at the cluster scope
    Apr 10 20:08:47 k8s-master journal: E0410 13:08:47.195848 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1beta1.ReplicaSet: replicasets.extensions is forbidden: User “system:kube-scheduler” cannot list replicasets.extensions at the cluster scope
    Apr 10 20:08:47 k8s-master journal: E0410 13:08:47.197795 1 reflector.go:205] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:594: Failed to list *v1.Pod: pods is forbidden: User “system:kube-scheduler” cannot list pods at the cluster scope
    Apr 10 20:08:47 k8s-master journal: E0410 13:08:47.201109 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.Service: services is forbidden: User “system:kube-scheduler” cannot list services at the cluster scope
    Apr 10 20:08:47 k8s-master journal: E0410 13:08:47.202618 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.Node: nodes is forbidden: User “system:kube-scheduler” cannot list nodes at the cluster scope
    Apr 10 20:08:47 k8s-master journal: E0410 13:08:47.202839 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User “system:kube-scheduler” cannot list persistentvolumeclaims at the cluster scope
    Apr 10 20:08:47 k8s-master journal: E0410 13:08:47.208671 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User “system:kube-scheduler” cannot list replicationcontrollers at the cluster scope
    Apr 10 20:08:47 k8s-master journal: E0410 13:08:47.211041 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User “system:kube-scheduler” cannot list storageclasses.storage.k8s.io at the cluster scope
    Apr 10 20:08:47 k8s-master journal: I0410 13:08:47.792028 1 trace.go:76] Trace[248092992]: “Create /apis/certificates.k8s.io/v1beta1/certificatesigningrequests” (started: 2018-04-10 13:08:37.791253589 +0000 UTC m=+7.170187464) (total time: 10.00073808s):
    Apr 10 20:08:47 k8s-master kubelet: E0410 20:08:47.792745 1389 certificate_manager.go:299] Failed while requesting a signed certificate from the master: cannot create certificate signing request: certificatesigningrequests.certificates.k8s.io “csr-” is forbidden: not yet ready to handle request

  2. If we have multiple nodes with different applications, how do we do the deployment in this case, and how does the master node know which node to deploy to?

  3. Hi Shufil,

    Thanks for your comment. We can deploy applications using labels. Please check our other blogs to learn about label functionality in a Kubernetes environment.

    Regards,
    Loges

  4. Hi,
    I am getting this error. Please help me out!
    Events:
    Type Reason Age From Message
    ---- ------ ---- ---- -------
    Warning FailedScheduling 4m (x12 over 6m) default-scheduler 0/1 nodes are available: 1 node(s) were not ready.
    Normal Scheduled 4m default-scheduler Successfully assigned kube-dns-86f4d74b45-thz7h to master
    Normal SuccessfulMountVolume 4m kubelet, master MountVolume.SetUp succeeded for volume “kube-dns-config”
    Normal SuccessfulMountVolume 4m kubelet, master MountVolume.SetUp succeeded for volume “kube-dns-token-xxl6s”
    Normal Pulling 4m kubelet, master pulling image “k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8”
    Normal Pulled 4m kubelet, master Successfully pulled image “k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8”
    Normal Pulling 4m kubelet, master pulling image “k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8”
    Normal Created 4m kubelet, master Created container
    Normal Pulling 4m kubelet, master pulling image “k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8”
    Normal Started 4m kubelet, master Started container
    Normal Pulled 4m kubelet, master Successfully pulled image “k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8”
    Normal Pulled 4m kubelet, master Successfully pulled image “k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8”
    Normal Created 4m kubelet, master Created container
    Normal Started 4m kubelet, master Started container
    Normal Started 3m (x2 over 4m) kubelet, master Started container
    Normal Created 3m (x2 over 4m) kubelet, master Created container
    Normal Pulled 3m kubelet, master Container image “k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8” already present on machine
    Warning Unhealthy 2m (x6 over 3m) kubelet, master Readiness probe failed: Get http://10.244.0.2:8081/readiness: dial tcp 10.244.0.2:8081: getsockopt: connection refused
    Warning Unhealthy 2m (x2 over 3m) kubelet, master Liveness probe failed: HTTP probe failed with statuscode: 503

  5. Hi dear friend,
    Thank you for your tutorial. I have a problem: I create the cluster with “kubeadm init --pod-network-cidr=10.244.0.0/16”, or with my preferred cluster IP range, for example “kubeadm init --pod-network-cidr=192.168.6.0/24” (my master IP is 192.168.6.150, netmask 255.255.255.0), and it completes successfully. But when I install the network with “kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml”, the output says 5 flannel objects were created successfully, yet when I run “kubectl get pods --all-namespaces” the flannel pods are not created and the master is not ready.
    Thank you

  6. kubectl describe pod kube-flannel-ds-6thpl --namespace=kube-system
    Name: kube-flannel-ds-6thpl
    Namespace: kube-system
    Priority: 0
    PriorityClassName:
    Node:
    Labels: app=flannel
    controller-revision-hash=5b7d8d548d
    pod-template-generation=1
    tier=node
    Annotations:
    Status: Pending
    IP:
    Controlled By: DaemonSet/kube-flannel-ds
    Init Containers:
    install-cni:
    Image: quay.io/coreos/flannel:v0.9.1-amd64
    Port:
    Host Port:
    Command:
    cp
    Args:
    -f
    /etc/kube-flannel/cni-conf.json
    /etc/cni/net.d/10-flannel.conf
    Environment:
    Mounts:
    /etc/cni/net.d from cni (rw)
    /etc/kube-flannel/ from flannel-cfg (rw)
    /var/run/secrets/kubernetes.io/serviceaccount from flannel-token-kl9hm (ro)
    Containers:
    kube-flannel:
    Image: quay.io/coreos/flannel:v0.9.1-amd64
    Port:
    Host Port:
    Command:
    /opt/bin/flanneld
    --ip-masq
    --kube-subnet-mgr
    Environment:
    POD_NAME: kube-flannel-ds-6thpl (v1:metadata.name)
    POD_NAMESPACE: kube-system (v1:metadata.namespace)
    Mounts:
    /etc/kube-flannel/ from flannel-cfg (rw)
    /run from run (rw)
    /var/run/secrets/kubernetes.io/serviceaccount from flannel-token-kl9hm (ro)
    Conditions:
    Type Status
    PodScheduled False
    Volumes:
    run:
    Type: HostPath (bare host directory volume)
    Path: /run
    HostPathType:
    cni:
    Type: HostPath (bare host directory volume)
    Path: /etc/cni/net.d
    HostPathType:
    flannel-cfg:
    Type: ConfigMap (a volume populated by a ConfigMap)
    Name: kube-flannel-cfg
    Optional: false
    flannel-token-kl9hm:
    Type: Secret (a volume populated by a Secret)
    SecretName: flannel-token-kl9hm
    Optional: false
    QoS Class: BestEffort
    Node-Selectors: beta.kubernetes.io/arch=amd64
    Tolerations: node-role.kubernetes.io/master:NoSchedule
    node.kubernetes.io/disk-pressure:NoSchedule
    node.kubernetes.io/memory-pressure:NoSchedule
    node.kubernetes.io/network-unavailable:NoSchedule
    node.kubernetes.io/not-ready:NoExecute
    node.kubernetes.io/unreachable:NoExecute
    node.kubernetes.io/unschedulable:NoSchedule
    Events:
    Type Reason Age From Message
    ---- ------ ---- ---- -------
    Warning FailedScheduling 1s (x24 over 110s) default-scheduler 0/2 nodes are available: 1 node(s) didn’t match node selector, 1 node(s) had taints that the pod didn’t tolerate.

  7. Hi,

    I am unable to start the kubelet service .
    Below are the logs. Please help me out.
    ————————————————————–
    [root@kubernetes ~]# systemctl status kubelet.service
    ● kubelet.service – kubelet: The Kubernetes Node Agent
    Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
    Drop-In: /etc/systemd/system/kubelet.service.d
    └─10-kubeadm.conf
    Active: activating (auto-restart) (Result: exit-code) since Sun 2019-03-24 00:00:44 IST; 9s ago
    Docs: https://kubernetes.io/docs/
    Process: 3104 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255)
    Main PID: 3104 (code=exited, status=255)

    Mar 24 00:00:44 kubernetes.example.com systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
    Mar 24 00:00:44 kubernetes.example.com systemd[1]: Unit kubelet.service entered failed state.
    Mar 24 00:00:44 kubernetes.example.com systemd[1]: kubelet.service failed.
    —————————————————————————————-

  8. Hi,
    I am getting an error that the nodes are not being added. It throws the error “Failed to request cluster info will try again connection refused”
