InfraPlatform

Kubernetes Installation on CentOS 8

IT오이시이 2020. 6. 21. 22:05

 

Step 1: Prepare Hostname, Firewall and SELinux

Add the following entries to /etc/hosts on every node:

192.168.0.23  master.im.com  master-node
192.168.0.26  work1.im.com   work-node1
192.168.0.27  work2.im.com   work-node2
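A minimal sketch that appends these entries idempotently; `HOSTS_FILE` stands in for /etc/hosts (it defaults to a scratch file here for safety), and `add_host` is an illustrative helper, not part of any standard tool:

```shell
# Append cluster host entries to a hosts file, skipping entries already present.
# HOSTS_FILE would be /etc/hosts on a real node; a scratch file is used here.
HOSTS_FILE="${HOSTS_FILE:-/tmp/hosts.sample}"

add_host() {
    # $1 = IP address, $2 = FQDN, $3 = short hostname
    grep -q "$2" "$HOSTS_FILE" 2>/dev/null ||
        printf '%s  %s  %s\n' "$1" "$2" "$3" >> "$HOSTS_FILE"
}

add_host 192.168.0.23 master.im.com master-node
add_host 192.168.0.26 work1.im.com  work-node1
add_host 192.168.0.27 work2.im.com  work-node2
```

Running the script twice leaves the file unchanged, so it is safe to re-run during provisioning.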

 

[master-node]

# firewall-cmd --permanent --add-port=6443/tcp
# firewall-cmd --permanent --add-port=2379-2380/tcp
# firewall-cmd --permanent --add-port=10250/tcp
# firewall-cmd --permanent --add-port=10251/tcp
# firewall-cmd --permanent --add-port=10252/tcp
# firewall-cmd --permanent --add-port=10255/tcp
# firewall-cmd --reload
# modprobe br_netfilter
# echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
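The repeated invocations above can be collapsed into a loop. A sketch that only prints the commands (`open_ports` is an illustrative name); on a real node, run the printed commands as root instead of echoing them:

```shell
# Print the firewall-cmd invocations for a list of TCP ports/ranges.
# Remove the echo layer (or pipe the output to sh as root) to apply them.
open_ports() {
    for p in "$@"; do
        echo "firewall-cmd --permanent --add-port=${p}/tcp"
    done
    echo "firewall-cmd --reload"
}

# Control-plane ports used in this post.
open_ports 6443 2379-2380 10250 10251 10252 10255
```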

[work-node1,2]

sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --permanent --add-port=10251/tcp
sudo firewall-cmd --permanent --add-port=10255/tcp
sudo firewall-cmd --permanent --add-port=30000-32767/tcp
sudo firewall-cmd --reload

Ports 10250 (Kubelet API) and 30000-32767 (NodePort Services) are the worker ports listed in the port table at the end of this post.

 

[Disable SELinux]

# setenforce 0
# sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
# reboot

 

Step 2: Setup the Kubernetes Repo

 

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
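A quick sanity check that the repo file was written with the fields yum needs; `check_repo` is a hypothetical helper, not a yum feature:

```shell
# Verify a .repo file contains the keys required for gpg-checked installs.
check_repo() {
    for key in baseurl gpgcheck gpgkey; do
        grep -q "^${key}=" "$1" || { echo "missing ${key}"; return 1; }
    done
    echo ok
}

# On a real node: check_repo /etc/yum.repos.d/kubernetes.repo
```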

 

Step 3: Install Kubeadm and Docker, cri-o

sudo yum install -y kubelet kubeadm kubectl

 

systemctl enable kubelet
systemctl start kubelet

 

yum list docker-ce --showduplicates | sort -r

yum install containerd.io -y


* yum install podman-docker containerd.io
* podman-docker probably does not need to be installed

 

How to install Docker manually
 * Version compatibility with CentOS 8 is poor, so a simple install does not work.

yum install https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.6-3.3.el7.x86_64.rpm -y
yum install https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-19.03.6-3.el7.x86_64.rpm -y
yum install https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-cli-19.03.6-3.el7.x86_64.rpm -y
yum install https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.3.ce-1.el7.noarch.rpm -y

 

# systemctl enable kubelet
# systemctl start kubelet
# systemctl enable docker
# systemctl start docker

Install CRI-O (Container Runtime Interface)

CRI-O implements the Kubelet Container Runtime Interface (CRI) using OCI-conformant runtimes.

curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/CentOS_7/devel:kubic:libcontainers:stable.repo
curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:1.18.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:1.18/CentOS_7/devel:kubic:libcontainers:stable:cri-o:1.18.repo


yum install cri-o -y
or
dnf install cri-o -y

Compatibility matrix (CRI-O ⬄ Kubernetes): https://github.com/cri-o/cri-o

 

Step 4: Initialize Kubernetes Master and Setup Default User

(Method 1) Install plain Kubernetes

# swapoff -a
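`swapoff -a` only lasts until the next reboot. To keep swap off permanently, the swap entries in /etc/fstab must also be commented out; a sketch that operates on whatever file you point it at, so it can be tried on a copy first (`disable_swap_fstab` is an illustrative helper):

```shell
# Comment out swap entries in an fstab-style file so swap stays off after reboot.
# Pass /etc/fstab on a real node; sed keeps a .bak copy of the original.
disable_swap_fstab() {
    sed -i.bak '/\sswap\s/s/^[^#]/#&/' "$1"
}

# On a real node: swapoff -a && disable_swap_fstab /etc/fstab
```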

 

[master nodes]

# kubeadm init

Running kubeadm init prints output like the following, including a guide for configuring the remaining nodes.

[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.23:6443 --token dbewjg.oi23vpxfef66qx5e \
    --discovery-token-ca-cert-hash sha256:53c31961238eb0b6cc6cf5293d4ab150c0002011e0a4b248732c288239a6bec3

 

[root@master-node ~]# kubeadm token create --print-join-command
W0703 06:00:31.320315   24523 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join 192.168.0.23:6443 --token c8m6gp.hqnv0gaxfnw7zw0p     --discovery-token-ca-cert-hash sha256:53c31961238eb0b6cc6cf5293d4ab150c0002011e0a4b248732c288239a6bec3
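If the join command is lost, the `--discovery-token-ca-cert-hash` value can also be recomputed by hand: it is the SHA-256 digest of the cluster CA's public key in DER form (the procedure documented for kubeadm). A sketch using openssl; `ca_cert_hash` is an illustrative wrapper:

```shell
# Print the sha256 discovery hash for a CA certificate (PEM input).
# On a control-plane node the input is /etc/kubernetes/pki/ca.crt.
ca_cert_hash() {
    openssl x509 -pubkey -in "$1" |
        openssl pkey -pubin -outform der 2>/dev/null |
        openssl dgst -sha256 -hex |
        sed 's/^.* //'
}

# Usage: kubeadm join <api-server>:6443 --token <token> \
#            --discovery-token-ca-cert-hash sha256:$(ca_cert_hash /etc/kubernetes/pki/ca.crt)
```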

 

[worker nodes]

rm -rf /etc/kubernetes/pki/ca.crt
rm -rf /etc/kubernetes/kubelet.conf
rm -rf $HOME/.kube/config

 

systemctl enable docker.service
systemctl start  docker.service
kubeadm join 192.168.0.23:6443 --token dbewjg.oi23vpxfef66qx5e \
    --discovery-token-ca-cert-hash sha256:53c31961238eb0b6cc6cf5293d4ab150c0002011e0a4b248732c288239a6bec3

 

(Method 2) Install Kubernetes with Flannel

Step 1: Create the cluster with kubeadm

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

Flannel expects the 10.244.0.0/16 pod CIDR, so it must be passed to kubeadm init.

Step 2: Set up the pod network (run after kubeadm init, once kubectl is configured)

sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

 


Step 3: Check the cluster status

sudo kubectl get pods --all-namespaces

Step 4: Manage the cluster as a regular user

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

 

Step 5: Join the worker nodes to the cluster

# 1. Remove configuration files left over from a previous join, which otherwise cause:
#    [ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
#    [ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists

rm -rf /etc/kubernetes/kubelet.conf /etc/kubernetes/pki/ca.crt
 
# 2. Stop the kubelet, to avoid:
#    [ERROR Port-10250]: Port 10250 is in use

systemctl stop kubelet


# 3. On each worker node, run the join command printed by
#    kubeadm token create --print-join-command on the master.

kubeadm join --discovery-token cfgrty.1234567890jyrfgd --discovery-token-ca-cert-hash sha256:1234..cdef 1.2.3.4:6443
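Steps 1 and 2 above can be wrapped in one helper. The sketch takes the filesystem root as a parameter so it can be exercised against a scratch directory; on a real worker you would pass `/` and also run `systemctl stop kubelet`:

```shell
# Remove stale kubeadm state before re-joining a worker node.
# $1 = filesystem root: '/' on a real node, a scratch dir when testing.
pre_join_cleanup() {
    root="${1:-/}"
    rm -f "${root}/etc/kubernetes/kubelet.conf" \
          "${root}/etc/kubernetes/pki/ca.crt"
    # On a real node, also stop the kubelet here: systemctl stop kubelet
}
```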

 

Step 6: Verify from the master node

# kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
master-node   Ready    master   6d17h   v1.18.4

 


 


 

[root@master-node ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.4", GitCommit:"c96aede7b5205121079932896c4ad89bb93260af", GitTreeState:"clean", BuildDate:"2020-06-17T11:41:22Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.4", GitCommit:"c96aede7b5205121079932896c4ad89bb93260af", GitTreeState:"clean", BuildDate:"2020-06-17T11:33:59Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

[root@master-node ~]# kubectl get node
NAME          STATUS     ROLES    AGE     VERSION
master-node   NotReady   master   6d11h   v1.18.4

A node stays NotReady until a pod network add-on such as Flannel has been applied.
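A small helper that flags NotReady nodes; it reads `kubectl get nodes` output from stdin, so it can be exercised without a live cluster (the function name is illustrative):

```shell
# Succeed only when no node line reports NotReady.
# Usage on a real cluster: kubectl get nodes | all_nodes_ready && echo cluster ok
all_nodes_ready() {
    ! grep -q 'NotReady'
}
```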

 

Control-plane node(s)

Protocol  Direction  Port Range   Purpose                  Used By

TCP       Inbound    6443*        Kubernetes API server    All
TCP       Inbound    2379-2380    etcd server client API   kube-apiserver, etcd
TCP       Inbound    10250        Kubelet API              Self, Control plane
TCP       Inbound    10251        kube-scheduler           Self
TCP       Inbound    10252        kube-controller-manager  Self

Worker node(s)

Protocol  Direction  Port Range    Purpose            Used By

TCP       Inbound    10250         Kubelet API        Self, Control plane
TCP       Inbound    30000-32767   NodePort Services†  All

† Default port range for NodePort Services.
