Prerequisites
No external network access (on-premise, offline, private, no internet connection)
- 4 or more servers (based on CentOS 7)
- local-repo (local mirror)
- docker images
- Files that kubespray downloads from Google during the run
(e.g. calicoctl-linux-amd64, cni-plugins-linux-amd64-v0.9.1.tgz, kubeadm, kubectl, kubelet)
- Either option 1 or option 2 in the diagram above can be used; switch between them by changing the inventory settings
- etcd must be installed on two or more servers (see Troubleshooting: the member count must in fact be odd)
- This guide cannot be used as-is on OpenStack (Mitaka) or Azure -> there are network issues
(Workaround: additional settings such as a cloud provider are required in kubespray.) The diagram above says "baremetal", but testing was actually done on AWS EC2.
Before Install
In an environment with internet access, check and download every file that will be needed.
- Build a local mirror
- Use Kubespray to build a registry and prepare the docker images
Building the Local Mirror (if there is no local repository on the internal network)
If the internal network has no local repo, do this on a new server.
Transfer the files needed to build the local repository to that server.
For the build procedure, see -> https://choonglee.tistory.com/6
Files required to build the repository:
- http://ibk.exntu.kr/local-install.tar
- http://ibk.exntu.kr/repo-1.tar
- http://ibk.exntu.kr/repo-2.tar
- http://ibk.exntu.kr/repo-3.tar
- http://ibk.exntu.kr/repo-4.tar
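For reference, a minimal sketch of serving the unpacked repository trees over HTTP on the mirror server; the directory layout (base/, docker-ce/, epel/, ...) is an assumption based on the baseurl entries used later in this guide, and the detailed procedure is in the link above.
# Assumption: the repo-N.tar files unpack into per-repo directories (base/, docker-ce/, epel/, ...)
mkdir -p /var/www/html
tar -xvf repo-1.tar -C /var/www/html
tar -xvf repo-2.tar -C /var/www/html
tar -xvf repo-3.tar -C /var/www/html
tar -xvf repo-4.tar -C /var/www/html
yum install -y httpd
systemctl enable --now httpd
# The baseurl entries below (http://<mirror-ip>/base/ etc.) should then resolve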
K8s Install
# Install against the local repository
# For the procedure, see -> https://choonglee.tistory.com/6
Baremetal1, Baremetal2, Baremetal3
# Disable swap
swapoff -a
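swapoff -a only lasts until the next reboot; to keep swap disabled permanently, the swap entry in /etc/fstab is usually commented out as well:
# Persist across reboots (assumes a standard swap line in /etc/fstab)
sed -i '/ swap / s/^/#/' /etc/fstab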
# Put SELinux into permissive mode
setenforce 0
# check selinux
getenforce
# Enable IP forwarding
sysctl -w net.ipv4.ip_forward=1
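sysctl -w is likewise not persistent; a common way to keep the setting after a reboot (the file name here is only an example):
cat <<EOF > /etc/sysctl.d/99-kubernetes.conf
net.ipv4.ip_forward = 1
EOF
sysctl --system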
# SSH configuration (optional)
vi /etc/ssh/sshd_config
...
PermitRootLogin yes
#PasswordAuthentication yes
...
systemctl restart sshd
# keygen
su root
ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
cat ~/.ssh/authorized_keys
# Add baremetal1's authorized_keys entry to baremetal2 and baremetal3, in the location shown below
vi ~/.ssh/authorized_keys
...
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDOchlKcDVyEeJ191EpqlEZi+rw0cQ6xUo6q0tnx+YPDwEiWkZP1nqQWDD5wBoE+ZAkhgOgxe4W5Cgallb+dMyd1A/LIX9eN5VkQvAvN0e7dxrIw5FPzQdikc74nbG9lIdI1SwZuCVR1koFNTFnVUvA5+V3c/Q1T99sKDW2Lx2WnxEeoI3mc2Cc+uDD/LF0lSQM3GtAn8/TNLCvAyjWZB0bQk7HNOwCaXBsarbqkK/saQMw+n0w3rXlvyAD67nBpj4eMWFydOz71THhl4DrwNO8f7S6wypfpCIxvD8dpYurEzq/DNu9yu58iLRppDP0Wo0L6LBE+BQQS9BS67dJjv4V root@ip-172-31-31-252.ap-northeast-2.compute.internal
...
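If password authentication is enabled on the other nodes, ssh-copy-id can distribute the key instead of editing authorized_keys by hand (the addresses below are placeholders):
# Run on baremetal1, replacing the placeholders with the real baremetal2/baremetal3 addresses
ssh-copy-id root@<baremetal2-ip>
ssh-copy-id root@<baremetal3-ip>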
# Files needed when installing with kubespray
# Search for "file:///" to see where each file is used
mkdir -p /root/kubespray/resources/
cd /root/kubespray/resources/
tar -xvf master-k8s-resources.tar
tar -xvf worker-k8s-resources.tar
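A quick sanity check that the extracted files are in place; these are the paths referenced later via file:///root/kubespray/resources/... in offline.yml:
ls -l /root/kubespray/resources/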
Baremetal1, Baremetal2, Baremetal3, Baremetal4
Configure the nodes to point at the local repository
cd /etc/yum.repos.d
mkdir backup
mv /etc/yum.repos.d/* /etc/yum.repos.d/backup
vi local-repos.repo
...
[local-base]
name=CentOS Base
baseurl=http://172.10.0.39/base/
gpgcheck=0
enabled=1
[local-centosplus]
name=CentOS Plus
baseurl=http://172.10.0.39/centosplus/
gpgcheck=0
enabled=1
[local-docker-ce]
name=docker-ce
baseurl=http://172.10.0.39/docker-ce/
gpgcheck=0
enabled=1
[local-epel]
name=epel
baseurl=http://172.10.0.39/epel/
gpgcheck=0
enabled=1
[local-extras]
name=CentOS extras
baseurl=http://172.10.0.39/extras/
gpgcheck=0
enabled=1
[local-ius]
name=ius
baseurl=http://172.10.0.39/ius/
gpgcheck=0
enabled=1
[local-kubernetes]
name=kubernetes
baseurl=http://172.10.0.39/kubernetes/
gpgcheck=0
enabled=1
[local-salt-latest]
name=salt-latest
baseurl=http://172.10.0.39/salt-latest/
gpgcheck=0
enabled=1
[local-updates]
name=updates
baseurl=http://172.10.0.39/updates/
gpgcheck=0
enabled=1
...
yum clean all
yum repolist all
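Optionally verify that packages now resolve from the local repositories only:
# docker-ce should be listed from the local-docker-ce repo
yum --disablerepo="*" --enablerepo="local-*" list available docker-ce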
# Point the docker registry at the local one as well
# The IP is the registry address
vi /etc/docker/daemon.json
...
{ "insecure-registries":["172.10.0.37:5000"] }
...
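docker only reads daemon.json at startup, so once docker is installed and running on a node, restart it and check that the insecure registry has been picked up:
systemctl restart docker
# "Insecure Registries" in the output should include 172.10.0.37:5000
docker info | grep -A2 "Insecure Registries"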
Baremetal4 (docker registry)
sudo yum install -y gcc gcc-c++ wget perl make git vim openssl-libs openssl openssl-devel libsepol-devel libselinux-python device-mapper-libs ebtables python-httplib2 curl rsync bash-completion socat unzip python-setuptools python-pip python36 python36-libs docker-ce vsftpd deltarpm python-deltarpm
systemctl start docker
cd /data
tar -xvf kubespray-offline-git.tar
cd kubespray
cd contrib/offline/
# The container-images.tar.gz file must be present for "register" to work
# It pushes the docker images that were downloaded earlier, while online
./manage-offline-container-images.sh register
# The script above starts a registry container on its own, but starting it with the command below is recommended instead
docker run -d -p 5000:5000 --restart=always --name registry -e REGISTRY_VALIDATION_DISABLED=true -v /registry:/var/lib/registry registry
# Make sure the registry address is set correctly! (vi /etc/docker/daemon.json)
Registering images in the docker registry
When registering, load, tag, and push each image as in the example below:
docker load -i docker.io-kubernetesui-dashboard-amd64-v2.2.0.tar
docker tag k8s.gcr.io/kube-proxy:v1.20.6 127.0.0.1:5000/kube-proxy:v1.20.6
docker push 127.0.0.1:5000/kube-proxy:v1.20.6
# Register every image as above; use 127.0.0.1 as the IP
# check repositories
curl -v http://${docker-private-registry-ip}:${port}/v2/_catalog
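For example, with the registry address used in this guide, once the images below have been loaded, tagged, and pushed (the output shown is only a sample and depends on what was actually pushed):
curl http://172.10.0.37:5000/v2/_catalog
# e.g. {"repositories":["addon-resizer","calico/cni","calico/kube-controllers", ...]}
# Tags for a single repository can be listed as well:
curl http://172.10.0.37:5000/v2/kube-proxy/tags/list
# e.g. {"name":"kube-proxy","tags":["v1.20.6"]}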
docker load -i docker.io-kubernetesui-dashboard-amd64-v2.2.0.tar
docker load -i docker.io-kubernetesui-metrics-scraper-v1.0.6.tar
docker load -i docker.io-library-nginx-1.19.tar
docker load -i docker.io-library-registry-2.7.1.tar
docker load -i k8s.gcr.io-addon-resizer-1.8.11.tar
docker load -i k8s.gcr.io-coredns-1.7.0.tar
docker load -i k8s.gcr.io-cpa-cluster-proportional-autoscaler-amd64-1.8.3.tar
docker load -i k8s.gcr.io-dns-k8s-dns-node-cache-1.17.1.tar
docker load -i k8s.gcr.io-ingress-nginx-controller-v0.43.0.tar
docker load -i k8s.gcr.io-kube-apiserver-v1.20.6.tar
docker load -i k8s.gcr.io-kube-controller-manager-v1.20.6.tar
docker load -i k8s.gcr.io-kube-proxy-v1.20.6.tar
docker load -i k8s.gcr.io-kube-registry-proxy-0.4.tar
docker load -i k8s.gcr.io-kube-scheduler-v1.20.6.tar
docker load -i k8s.gcr.io-metrics-server-metrics-server-v0.4.2.tar
docker load -i k8s.gcr.io-pause-3.2.tar
docker load -i k8s.gcr.io-pause-3.3.tar
docker load -i quay.io-calico-cni-v3.17.3.tar
docker load -i quay.io-calico-kube-controllers-v3.17.3.tar
docker load -i quay.io-calico-node-v3.17.3.tar
docker load -i quay.io-coreos-etcd-v3.4.13.tar
Source image names (as loaded above):
k8s.gcr.io/kube-proxy:v1.20.6
k8s.gcr.io/kube-apiserver:v1.20.6
k8s.gcr.io/kube-scheduler:v1.20.6
k8s.gcr.io/kube-controller-manager:v1.20.6
registry:2.7.1
nginx:1.19
k8s.gcr.io/dns/k8s-dns-node-cache:1.17.1
quay.io/calico/node:v3.17.3
quay.io/calico/cni:v3.17.3
quay.io/calico/kube-controllers:v3.17.3
kubernetesui/dashboard-amd64:v2.2.0
kubernetesui/dashboard-amd64:v2.2.0
k8s.gcr.io/metrics-server/metrics-server:v0.4.2
k8s.gcr.io/ingress-nginx/controller:v0.43.0
kubernetesui/metrics-scraper:v1.0.6
kubernetesui/metrics-scraper:v1.0.6
quay.io/coreos/etcd:v3.4.13
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3
k8s.gcr.io/addon-resizer:1.8.11
k8s.gcr.io/coredns:1.7.0
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.2
k8s.gcr.io/kube-registry-proxy:0.4
Names when tagged for 127.0.0.1:5000:
127.0.0.1:5000/kube-proxy:v1.20.6
127.0.0.1:5000/kube-apiserver:v1.20.6
127.0.0.1:5000/kube-scheduler:v1.20.6
127.0.0.1:5000/kube-controller-manager:v1.20.6
127.0.0.1:5000/registry:2.7.1
127.0.0.1:5000/nginx:1.19
127.0.0.1:5000/dns/k8s-dns-node-cache:1.17.1
127.0.0.1:5000/calico/node:v3.17.3
127.0.0.1:5000/calico/cni:v3.17.3
127.0.0.1:5000/calico/kube-controllers:v3.17.3
127.0.0.1:5000/kubernetesui/dashboard-amd64:v2.2.0
127.0.0.1:5000/dashboard-amd64:v2.2.0
127.0.0.1:5000/metrics-server/metrics-server:v0.4.2
127.0.0.1:5000/ingress-nginx/controller:v0.43.0
127.0.0.1:5000/kubernetesui/metrics-scraper:v1.0.6
127.0.0.1:5000/metrics-scraper:v1.0.6
127.0.0.1:5000/coreos/etcd:v3.4.13
127.0.0.1:5000/cpa/cluster-proportional-autoscaler-amd64:1.8.3
127.0.0.1:5000/addon-resizer:1.8.11
127.0.0.1:5000/coredns:1.7.0
127.0.0.1:5000/pause:3.3
127.0.0.1:5000/pause:3.2
127.0.0.1:5000/kube-registry-proxy:0.4
Names when tagged for the registry address 172.10.0.37:5000:
172.10.0.37:5000/kube-proxy:v1.20.6
172.10.0.37:5000/kube-apiserver:v1.20.6
172.10.0.37:5000/kube-scheduler:v1.20.6
172.10.0.37:5000/kube-controller-manager:v1.20.6
172.10.0.37:5000/registry:2.7.1
172.10.0.37:5000/nginx:1.19
172.10.0.37:5000/dns/k8s-dns-node-cache:1.17.1
172.10.0.37:5000/calico/node:v3.17.3
172.10.0.37:5000/calico/cni:v3.17.3
172.10.0.37:5000/calico/kube-controllers:v3.17.3
172.10.0.37:5000/kubernetesui/dashboard-amd64:v2.2.0
172.10.0.37:5000/dashboard-amd64:v2.2.0
172.10.0.37:5000/metrics-server/metrics-server:v0.4.2
172.10.0.37:5000/ingress-nginx/controller:v0.43.0
172.10.0.37:5000/kubernetesui/metrics-scraper:v1.0.6
172.10.0.37:5000/metrics-scraper:v1.0.6
172.10.0.37:5000/coreos/etcd:v3.4.13
172.10.0.37:5000/cpa/cluster-proportional-autoscaler-amd64:1.8.3
172.10.0.37:5000/addon-resizer:1.8.11
172.10.0.37:5000/coredns:1.7.0
172.10.0.37:5000/pause:3.3
172.10.0.37:5000/pause:3.2
172.10.0.37:5000/kube-registry-proxy:0.4
docker tag k8s.gcr.io/kube-proxy:v1.20.6 127.0.0.1:5000/kube-proxy:v1.20.6
docker tag k8s.gcr.io/kube-apiserver:v1.20.6 127.0.0.1:5000/kube-apiserver:v1.20.6
docker tag k8s.gcr.io/kube-scheduler:v1.20.6 127.0.0.1:5000/kube-scheduler:v1.20.6
docker tag k8s.gcr.io/kube-controller-manager:v1.20.6 127.0.0.1:5000/kube-controller-manager:v1.20.6
docker tag registry:2.7.1 127.0.0.1:5000/registry:2.7.1
docker tag nginx:1.19 127.0.0.1:5000/nginx:1.19
docker tag k8s.gcr.io/dns/k8s-dns-node-cache:1.17.1 127.0.0.1:5000/dns/k8s-dns-node-cache:1.17.1
docker tag quay.io/calico/node:v3.17.3 127.0.0.1:5000/calico/node:v3.17.3
docker tag quay.io/calico/cni:v3.17.3 127.0.0.1:5000/calico/cni:v3.17.3
docker tag quay.io/calico/kube-controllers:v3.17.3 127.0.0.1:5000/calico/kube-controllers:v3.17.3
docker tag kubernetesui/dashboard-amd64:v2.2.0 127.0.0.1:5000/kubernetesui/dashboard-amd64:v2.2.0
docker tag kubernetesui/dashboard-amd64:v2.2.0 127.0.0.1:5000/dashboard-amd64:v2.2.0
docker tag k8s.gcr.io/metrics-server/metrics-server:v0.4.2 127.0.0.1:5000/metrics-server/metrics-server:v0.4.2
docker tag k8s.gcr.io/ingress-nginx/controller:v0.43.0 127.0.0.1:5000/ingress-nginx/controller:v0.43.0
docker tag kubernetesui/metrics-scraper:v1.0.6 127.0.0.1:5000/kubernetesui/metrics-scraper:v1.0.6
docker tag kubernetesui/metrics-scraper:v1.0.6 127.0.0.1:5000/metrics-scraper:v1.0.6
docker tag quay.io/coreos/etcd:v3.4.13 127.0.0.1:5000/coreos/etcd:v3.4.13
docker tag k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3 127.0.0.1:5000/cpa/cluster-proportional-autoscaler-amd64:1.8.3
docker tag k8s.gcr.io/addon-resizer:1.8.11 127.0.0.1:5000/addon-resizer:1.8.11
docker tag k8s.gcr.io/coredns:1.7.0 127.0.0.1:5000/coredns:1.7.0
docker tag k8s.gcr.io/pause:3.3 127.0.0.1:5000/pause:3.3
docker tag k8s.gcr.io/pause:3.2 127.0.0.1:5000/pause:3.2
docker tag k8s.gcr.io/kube-registry-proxy:0.4 127.0.0.1:5000/kube-registry-proxy:0.4
docker run -d -p 5000:5000 --restart=always --name registry -v /registry:/var/lib/registry registry
docker push 127.0.0.1:5000/kube-proxy:v1.20.6
docker push 127.0.0.1:5000/kube-apiserver:v1.20.6
docker push 127.0.0.1:5000/kube-scheduler:v1.20.6
docker push 127.0.0.1:5000/kube-controller-manager:v1.20.6
docker push 127.0.0.1:5000/registry:2.7.1
docker push 127.0.0.1:5000/nginx:1.19
docker push 127.0.0.1:5000/dns/k8s-dns-node-cache:1.17.1
docker push 127.0.0.1:5000/calico/node:v3.17.3
docker push 127.0.0.1:5000/calico/cni:v3.17.3
docker push 127.0.0.1:5000/calico/kube-controllers:v3.17.3
docker push 127.0.0.1:5000/kubernetesui/dashboard-amd64:v2.2.0
docker push 127.0.0.1:5000/dashboard-amd64:v2.2.0
docker push 127.0.0.1:5000/metrics-server/metrics-server:v0.4.2
docker push 127.0.0.1:5000/ingress-nginx/controller:v0.43.0
docker push 127.0.0.1:5000/kubernetesui/metrics-scraper:v1.0.6
docker push 127.0.0.1:5000/metrics-scraper:v1.0.6
docker push 127.0.0.1:5000/coreos/etcd:v3.4.13
docker push 127.0.0.1:5000/cpa/cluster-proportional-autoscaler-amd64:1.8.3
docker push 127.0.0.1:5000/addon-resizer:1.8.11
docker push 127.0.0.1:5000/coredns:1.7.0
docker push 127.0.0.1:5000/pause:3.3
docker push 127.0.0.1:5000/pause:3.2
docker push 127.0.0.1:5000/kube-registry-proxy:0.4
docker tag k8s.gcr.io/kube-proxy:v1.20.6 172.10.0.37:5000/kube-proxy:v1.20.6
docker tag k8s.gcr.io/kube-apiserver:v1.20.6 172.10.0.37:5000/kube-apiserver:v1.20.6
docker tag k8s.gcr.io/kube-scheduler:v1.20.6 172.10.0.37:5000/kube-scheduler:v1.20.6
docker tag k8s.gcr.io/kube-controller-manager:v1.20.6 172.10.0.37:5000/kube-controller-manager:v1.20.6
docker tag registry:2.7.1 172.10.0.37:5000/registry:2.7.1
docker tag nginx:1.19 172.10.0.37:5000/nginx:1.19
docker tag k8s.gcr.io/dns/k8s-dns-node-cache:1.17.1 172.10.0.37:5000/dns/k8s-dns-node-cache:1.17.1
docker tag quay.io/calico/node:v3.17.3 172.10.0.37:5000/calico/node:v3.17.3
docker tag quay.io/calico/cni:v3.17.3 172.10.0.37:5000/calico/cni:v3.17.3
docker tag quay.io/calico/kube-controllers:v3.17.3 172.10.0.37:5000/calico/kube-controllers:v3.17.3
docker tag kubernetesui/dashboard-amd64:v2.2.0 172.10.0.37:5000/kubernetesui/dashboard-amd64:v2.2.0
docker tag kubernetesui/dashboard-amd64:v2.2.0 172.10.0.37:5000/dashboard-amd64:v2.2.0
docker tag k8s.gcr.io/metrics-server/metrics-server:v0.4.2 172.10.0.37:5000/metrics-server/metrics-server:v0.4.2
docker tag k8s.gcr.io/ingress-nginx/controller:v0.43.0 172.10.0.37:5000/ingress-nginx/controller:v0.43.0
docker tag kubernetesui/metrics-scraper:v1.0.6 172.10.0.37:5000/kubernetesui/metrics-scraper:v1.0.6
docker tag kubernetesui/metrics-scraper:v1.0.6 172.10.0.37:5000/metrics-scraper:v1.0.6
docker tag quay.io/coreos/etcd:v3.4.13 172.10.0.37:5000/coreos/etcd:v3.4.13
docker tag k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3 172.10.0.37:5000/cpa/cluster-proportional-autoscaler-amd64:1.8.3
docker tag k8s.gcr.io/addon-resizer:1.8.11 172.10.0.37:5000/addon-resizer:1.8.11
docker tag k8s.gcr.io/coredns:1.7.0 172.10.0.37:5000/coredns:1.7.0
docker tag k8s.gcr.io/pause:3.3 172.10.0.37:5000/pause:3.3
docker tag k8s.gcr.io/pause:3.2 172.10.0.37:5000/pause:3.2
docker tag k8s.gcr.io/kube-registry-proxy:0.4 172.10.0.37:5000/kube-registry-proxy:0.4
docker push 172.10.0.37:5000/kube-proxy:v1.20.6
docker push 172.10.0.37:5000/kube-apiserver:v1.20.6
docker push 172.10.0.37:5000/kube-scheduler:v1.20.6
docker push 172.10.0.37:5000/kube-controller-manager:v1.20.6
docker push 172.10.0.37:5000/registry:2.7.1
docker push 172.10.0.37:5000/nginx:1.19
docker push 172.10.0.37:5000/dns/k8s-dns-node-cache:1.17.1
docker push 172.10.0.37:5000/calico/node:v3.17.3
docker push 172.10.0.37:5000/calico/cni:v3.17.3
docker push 172.10.0.37:5000/calico/kube-controllers:v3.17.3
docker push 172.10.0.37:5000/kubernetesui/dashboard-amd64:v2.2.0
docker push 172.10.0.37:5000/dashboard-amd64:v2.2.0
docker push 172.10.0.37:5000/metrics-server/metrics-server:v0.4.2
docker push 172.10.0.37:5000/ingress-nginx/controller:v0.43.0
docker push 172.10.0.37:5000/kubernetesui/metrics-scraper:v1.0.6
docker push 172.10.0.37:5000/metrics-scraper:v1.0.6
docker push 172.10.0.37:5000/coreos/etcd:v3.4.13
docker push 172.10.0.37:5000/cpa/cluster-proportional-autoscaler-amd64:1.8.3
docker push 172.10.0.37:5000/addon-resizer:1.8.11
docker push 172.10.0.37:5000/coredns:1.7.0
docker push 172.10.0.37:5000/pause:3.3
docker push 172.10.0.37:5000/pause:3.2
docker push 172.10.0.37:5000/kube-registry-proxy:0.4
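The per-image tag/push lists above can also be driven by a small loop. This is only a sketch for the primary target names (the alias tags without the kubernetesui/, dns/, etc. prefixes still need the explicit commands above); the REGISTRY/IMAGE/TARGET variables are illustrative, and the registry address is the one used in this guide:
REGISTRY=172.10.0.37:5000
for IMAGE in \
  k8s.gcr.io/kube-proxy:v1.20.6 \
  k8s.gcr.io/kube-apiserver:v1.20.6 \
  quay.io/calico/node:v3.17.3 \
  kubernetesui/dashboard-amd64:v2.2.0   # ...extend with the rest of the list above
do
  # Strip the source registry prefix (k8s.gcr.io/ or quay.io/) and keep the repository path
  TARGET=${REGISTRY}/$(echo "${IMAGE}" | sed -E 's#^(k8s\.gcr\.io|quay\.io)/##')
  docker tag "${IMAGE}" "${TARGET}"
  docker push "${TARGET}"
done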
Baremetal1
# sudo yum install epel-release -y
sudo yum install -y gcc gcc-c++ wget perl make git vim openssl-libs openssl openssl-devel libsepol-devel libselinux-python device-mapper-libs ebtables python-httplib2 curl rsync bash-completion socat unzip python-setuptools python-pip python36 python36-libs docker vsftpd deltarpm python-deltarpm
systemctl start docker
mkdir /data/pip-downloadonly/pip-kubespray-requirements
cd /data/pip-downloadonly/pip-kubespray-requirements
cat <<EOF > /data/pip-downloadonly/pip-kubespray-requirements/requirements.txt
ansible==2.9.20
cryptography==2.8
jinja2==2.11.3
netaddr==0.7.19
pbr==5.4.4
jmespath==0.9.5
ruamel.yaml==0.16.10
EOF
# Command that was used (while online) to download these packages
# pip download -d /data/pip-downloadonly/pip-kubespray-requirements -r requirements.txt
mkdir -p /data/pip-downloadonly/pip-kubespray-requirements
cd /data/pip-downloadonly/pip-kubespray-requirements
tar -xvf pip-packages.tar
pip install -r kubespray/requirements.txt --find-links=/data/pip-downloadonly/pip-kubespray-requirements/ --no-index
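A quick check that the offline pip install worked (ansible 2.9.20 is the version pinned in requirements.txt):
ansible --version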
# Proceed as the root user
sudo -i
# Use kubespray-offline-git.tar
(https://github.com/choonglee/kubespray)
cd /data
tar -xvf kubespray-offline-git.tar
cd kubespray
cd contrib/offline/
# The container-images.tar.gz file must be present for "register" to work
# It pushes the docker images that were downloaded earlier, while online
./manage-offline-container-images.sh register
# Configure the inventory
cp -rfp inventory/sample/ inventory/mycluster
# Configure the kubespray hosts
vi ~/kubespray/inventory/mycluster/inventory.ini
...
[all]
node1 ansible_ssh_host=172.31.31.252 ip=172.31.31.252 etcd_member_name=etcd1
node2 ansible_ssh_host=172.31.30.39 ip=172.31.30.39 etcd_member_name=etcd2
node3 ansible_ssh_host=172.31.25.227 ip=172.31.25.227 etcd_member_name=etcd3
[kube-master]
node1
[etcd]
node1
node2
node3
[kube-node]
node1
node2
node3
[k8s-cluster:children]
kube-node
kube-master
[calico-rr]
...
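Before running the full playbook, it is worth confirming that ansible can reach every host in the inventory over SSH:
# Run from the kubespray directory
ansible -i inventory/mycluster/inventory.ini all -u root -m ping
# Every node should reply with "ping": "pong"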
# Only modify or add the parts below
vi inventory/mycluster/group_vars/all/offline.yml
...
---
## Global Offline settings
### Private Container Image Registry
registry_host: "172.10.0.37:5000"
# files_repo: "http://myprivatehttpd"
### If using CentOS, RedHat, AlmaLinux or Fedora
yum_repo: "http://172.10.0.33"
### If using Debian
# debian_repo: "http://myinternaldebianrepo"
### If using Ubuntu
# ubuntu_repo: "http://myinternalubunturepo"
## Container Registry overrides
kube_image_repo: "{{ registry_host }}"
gcr_image_repo: "{{ registry_host }}"
docker_image_repo: "{{ registry_host }}"
quay_image_repo: "{{ registry_host }}"
kubeadm_download_url: "file:///root/kubespray/resources/kubeadm"
kubectl_download_url: "file:///root/kubespray/resources/kubectl"
kubelet_download_url: "file:///root/kubespray/resources/kubelet"
## CNI Plugins
cni_download_url: "file:///root/kubespray/resources/cni-plugins-linux-{{ image_arch }}-{{ cni_version }}.tgz"
# [Optional] Calico: If using Calico network plugin
calicoctl_download_url: "file:///root/kubespray/resources/calicoctl-linux-{{ image_arch }}"
# [Optional] Calico with kdd: If using Calico network plugin with kdd datastore
calico_crds_download_url: "file:///root/kubespray/resources/calico-{{ calico_version }}.tar.gz"
# [Optional] helm: only if you set helm_enabled: true
helm_download_url: "file:///root/kubespray/resources/helm-{{ helm_version }}-linux-{{ image_arch }}.tar.gz"
# repository address (local mirror)
docker_rh_repo_base_url: "http://172.10.0.38"
docker_rh_repo_gpgkey: "file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7"
...
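The file:/// URLs above are resolved on each target node, so the resources directory must exist on every node; a quick ad-hoc check from the ansible host:
ansible -i inventory/mycluster/inventory.ini all -u root -m shell -a "ls /root/kubespray/resources/"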
# Only modify or add the parts below
vi inventory/mycluster/group_vars/k8s-cluster/addons.yml
...
# RBAC required. see docs/getting-started.md for access details.
dashboard_enabled: true
# Helm deployment
helm_enabled: true
# Registry deployment
registry_enabled: true
metrics_server_enabled: true
local_path_provisioner_enabled: false
local_volume_provisioner_enabled: false
cephfs_provisioner_enabled: false
rbd_provisioner_enabled: false
ingress_nginx_enabled: true
ingress_publish_status_address: ""
ingress_ambassador_enabled: false
ingress_alb_enabled: false
# Cert manager deployment
cert_manager_enabled: false
# MetalLB deployment
metallb_enabled: false
...
# Copy and paste the settings below as-is
vi inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml
...
---
# Kubernetes configuration dirs and system namespace.
# Those are where all the additional config stuff goes
# the kubernetes normally puts in /srv/kubernetes.
# This puts them in a sane location and namespace.
# Editing those values will almost surely break something.
kube_config_dir: /etc/kubernetes
kube_script_dir: "{{ bin_dir }}/kubernetes-scripts"
kube_manifest_dir: "{{ kube_config_dir }}/manifests"
# This is where all the cert scripts and certs will be located
kube_cert_dir: "{{ kube_config_dir }}/ssl"
# This is where all of the bearer tokens will be stored
kube_token_dir: "{{ kube_config_dir }}/tokens"
kube_api_anonymous_auth: true
## Change this to use another Kubernetes version, e.g. a current beta release
kube_version: v1.20.6
# Where the binaries will be downloaded.
# Note: ensure that you've enough disk space (about 1G)
local_release_dir: "/tmp/releases"
# Random shifts for retrying failed ops like pushing/downloading
retry_stagger: 5
# This is the group that the cert creation scripts chgrp the
# cert files to. Not really changeable...
kube_cert_group: kube-cert
# Cluster Loglevel configuration
kube_log_level: 2
# Directory where credentials will be stored
credentials_dir: "{{ inventory_dir }}/credentials"
## It is possible to activate / deactivate selected authentication methods (oidc, static token auth)
# kube_oidc_auth: false
# kube_token_auth: false
## Variables for OpenID Connect Configuration https://kubernetes.io/docs/admin/authentication/
## To use OpenID you have to deploy additional an OpenID Provider (e.g Dex, Keycloak, ...)
# kube_oidc_url: https:// ...
# kube_oidc_client_id: kubernetes
## Optional settings for OIDC
# kube_oidc_ca_file: "{{ kube_cert_dir }}/ca.pem"
# kube_oidc_username_claim: sub
# kube_oidc_username_prefix: 'oidc:'
# kube_oidc_groups_claim: groups
# kube_oidc_groups_prefix: 'oidc:'
## Variables to control webhook authn/authz
# kube_webhook_token_auth: false
# kube_webhook_token_auth_url: https://...
# kube_webhook_token_auth_url_skip_tls_verify: false
## For webhook authorization, authorization_modes must include Webhook
# kube_webhook_authorization: false
# kube_webhook_authorization_url: https://...
# kube_webhook_authorization_url_skip_tls_verify: false
# Choose network plugin (cilium, calico, weave or flannel. Use cni for generic cni plugin)
# Can also be set to 'cloud', which lets the cloud provider setup appropriate routing
kube_network_plugin: calico
# Setting multi_networking to true will install Multus: https://github.com/intel/multus-cni
kube_network_plugin_multus: false
# Kubernetes internal network for services, unused block of space.
kube_service_addresses: 10.233.0.0/18
# internal network. When used, it will assign IP
# addresses from this range to individual pods.
# This network must be unused in your network infrastructure!
kube_pods_subnet: 10.233.64.0/18
# internal network node size allocation (optional). This is the size allocated
# to each node for pod IP address allocation. Note that the number of pods per node is
# also limited by the kubelet_max_pods variable which defaults to 110.
#
# Example:
# Up to 64 nodes and up to 254 or kubelet_max_pods (the lowest of the two) pods per node:
# - kube_pods_subnet: 10.233.64.0/18
# - kube_network_node_prefix: 24
# - kubelet_max_pods: 110
#
# Example:
# Up to 128 nodes and up to 126 or kubelet_max_pods (the lowest of the two) pods per node:
# - kube_pods_subnet: 10.233.64.0/18
# - kube_network_node_prefix: 25
# - kubelet_max_pods: 110
kube_network_node_prefix: 24
# Configure Dual Stack networking (i.e. both IPv4 and IPv6)
enable_dual_stack_networks: false
# Kubernetes internal network for IPv6 services, unused block of space.
# This is only used if enable_dual_stack_networks is set to true
# This provides 4096 IPv6 IPs
kube_service_addresses_ipv6: fd85:ee78:d8a6:8607::1000/116
# Internal network. When used, it will assign IPv6 addresses from this range to individual pods.
# This network must not already be in your network infrastructure!
# This is only used if enable_dual_stack_networks is set to true.
# This provides room for 256 nodes with 254 pods per node.
kube_pods_subnet_ipv6: fd85:ee78:d8a6:8607::1:0000/112
# IPv6 subnet size allocated to each for pods.
# This is only used if enable_dual_stack_networks is set to true
# This provides room for 254 pods per node.
kube_network_node_prefix_ipv6: 120
# The port the API Server will be listening on.
kube_apiserver_ip: "{{ kube_service_addresses|ipaddr('net')|ipaddr(1)|ipaddr('address') }}"
kube_apiserver_port: 6443 # (https)
# kube_apiserver_insecure_port: 8080 # (http)
# Set to 0 to disable insecure port - Requires RBAC in authorization_modes and kube_api_anonymous_auth: true
kube_apiserver_insecure_port: 0 # (disabled)
# Kube-proxy proxyMode configuration.
# Can be ipvs, iptables
#kube_proxy_mode: ipvs
kube_proxy_mode: iptables
# configure arp_ignore and arp_announce to avoid answering ARP queries from kube-ipvs0 interface
# must be set to true for MetalLB to work
kube_proxy_strict_arp: false
# A string slice of values which specify the addresses to use for NodePorts.
# Values may be valid IP blocks (e.g. 1.2.3.0/24, 1.2.3.4/32).
# The default empty string slice ([]) means to use all local addresses.
# kube_proxy_nodeport_addresses_cidr is retained for legacy config
kube_proxy_nodeport_addresses: >-
{%- if kube_proxy_nodeport_addresses_cidr is defined -%}
[{{ kube_proxy_nodeport_addresses_cidr }}]
{%- else -%}
[]
{%- endif -%}
# If non-empty, will use this string as identification instead of the actual hostname
# kube_override_hostname: >-
# {%- if cloud_provider is defined and cloud_provider in [ 'aws' ] -%}
# {%- else -%}
# {{ inventory_hostname }}
# {%- endif -%}
## Encrypting Secret Data at Rest (experimental)
kube_encrypt_secret_data: false
# DNS configuration.
# Kubernetes cluster name, also will be used as DNS domain
cluster_name: cluster.ibk
# Subdomains of DNS domain to be resolved via /etc/resolv.conf for hostnet pods
ndots: 2
# Can be coredns, coredns_dual, manual or none
dns_mode: coredns
# Set manual server if using a custom cluster DNS server
# manual_dns_server: 10.x.x.x
# Enable nodelocal dns cache
enable_nodelocaldns: false
nodelocaldns_ip: 169.254.25.10
nodelocaldns_health_port: 9254
# nodelocaldns_external_zones:
# - zones:
# - example.com
# - example.io:1053
# nameservers:
# - 1.1.1.1
# - 2.2.2.2
# cache: 5
# - zones:
# - https://mycompany.local:4453
# nameservers:
# - 192.168.0.53
# cache: 0
# Enable k8s_external plugin for CoreDNS
enable_coredns_k8s_external: false
coredns_k8s_external_zone: k8s_external.local
# Enable endpoint_pod_names option for kubernetes plugin
enable_coredns_k8s_endpoint_pod_names: false
# Can be docker_dns, host_resolvconf or none
resolvconf_mode: none
# Deploy netchecker app to verify DNS resolve as an HTTP service
deploy_netchecker: false
# Ip address of the kubernetes skydns service
skydns_server: "{{ kube_service_addresses|ipaddr('net')|ipaddr(3)|ipaddr('address') }}"
skydns_server_secondary: "{{ kube_service_addresses|ipaddr('net')|ipaddr(4)|ipaddr('address') }}"
dns_domain: "{{ cluster_name }}"
## Container runtime
## docker for docker, crio for cri-o and containerd for containerd.
container_manager: docker
# Additional container runtimes
kata_containers_enabled: false
kubeadm_certificate_key: "{{ lookup('password', credentials_dir + '/kubeadm_certificate_key.creds length=64 chars=hexdigits') | lower }}"
# K8s image pull policy (imagePullPolicy)
k8s_image_pull_policy: IfNotPresent
# audit log for kubernetes
kubernetes_audit: false
# dynamic kubelet configuration
dynamic_kubelet_configuration: false
# define kubelet config dir for dynamic kubelet
# kubelet_config_dir:
default_kubelet_config_dir: "{{ kube_config_dir }}/dynamic_kubelet_dir"
dynamic_kubelet_configuration_dir: "{{ kubelet_config_dir | default(default_kubelet_config_dir) }}"
# pod security policy (RBAC must be enabled either by having 'RBAC' in authorization_modes or kubeadm enabled)
podsecuritypolicy_enabled: false
# Custom PodSecurityPolicySpec for restricted policy
# podsecuritypolicy_restricted_spec: {}
# Custom PodSecurityPolicySpec for privileged policy
# podsecuritypolicy_privileged_spec: {}
# Make a copy of kubeconfig on the host that runs Ansible in {{ inventory_dir }}/artifacts
# kubeconfig_localhost: false
# Download kubectl onto the host that runs Ansible in {{ bin_dir }}
# kubectl_localhost: false
# A comma separated list of levels of node allocatable enforcement to be enforced by kubelet.
# Acceptable options are 'pods', 'system-reserved', 'kube-reserved' and ''. Default is "".
# kubelet_enforce_node_allocatable: pods
## Optionally reserve resources for OS system daemons.
# system_reserved: true
## Uncomment to override default values
# system_memory_reserved: 512Mi
# system_cpu_reserved: 500m
## Reservation for master hosts
# system_master_memory_reserved: 256Mi
# system_master_cpu_reserved: 250m
# An alternative flexvolume plugin directory
# kubelet_flexvolumes_plugins_dir: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
## Supplementary addresses that can be added in kubernetes ssl keys.
## That can be useful for example to setup a keepalived virtual IP
# supplementary_addresses_in_ssl_keys: [10.0.0.1, 10.0.0.2, 10.0.0.3]
## Running on top of openstack vms with cinder enabled may lead to unschedulable pods due to NoVolumeZoneConflict restriction in kube-scheduler.
## See https://github.com/kubernetes-sigs/kubespray/issues/2141
## Set this variable to true to get rid of this issue
volume_cross_zone_attachment: false
## Add Persistent Volumes Storage Class for corresponding cloud provider (supported: in-tree OpenStack, Cinder CSI,
## AWS EBS CSI, Azure Disk CSI, GCP Persistent Disk CSI)
persistent_volumes_enabled: false
## Container Engine Acceleration
## Enable container acceleration feature, for example use gpu acceleration in containers
# nvidia_accelerator_enabled: true
## Nvidia GPU driver install. Install will by done by a (init) pod running as a daemonset.
## Important: if you use Ubuntu then you should set in all.yml 'docker_storage_options: -s overlay2'
## Array with nvida_gpu_nodes, leave empty or comment if you don't want to install drivers.
## Labels and taints won't be set to nodes if they are not in the array.
# nvidia_gpu_nodes:
# - kube-gpu-001
# nvidia_driver_version: "384.111"
## flavor can be tesla or gtx
# nvidia_gpu_flavor: gtx
## NVIDIA driver installer images. Change them if you have trouble accessing gcr.io.
# nvidia_driver_install_centos_container: atzedevries/nvidia-centos-driver-installer:2
# nvidia_driver_install_ubuntu_container: gcr.io/google-containers/ubuntu-nvidia-driver-installer@sha256:7df76a0f0a17294e86f691c81de6bbb7c04a1b4b3d4ea4e7e2cccdc42e1f6d63
## NVIDIA GPU device plugin image.
# nvidia_gpu_device_plugin_container: "k8s.gcr.io/nvidia-gpu-device-plugin@sha256:0842734032018be107fa2490c98156992911e3e1f2a21e059ff0105b07dd8e9e"
## Support tls min version, Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13.
# tls_min_version: ""
## Support tls cipher suites.
# tls_cipher_suites: {}
# - TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
# - TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
# - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
# - TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
# - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
# - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
# - TLS_ECDHE_ECDSA_WITH_RC4_128_SHA
# - TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA
# - TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
# - TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
# - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
# - TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
# - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
# - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
# - TLS_ECDHE_RSA_WITH_RC4_128_SHA
# - TLS_RSA_WITH_3DES_EDE_CBC_SHA
# - TLS_RSA_WITH_AES_128_CBC_SHA
# - TLS_RSA_WITH_AES_128_CBC_SHA256
# - TLS_RSA_WITH_AES_128_GCM_SHA256
# - TLS_RSA_WITH_AES_256_CBC_SHA
# - TLS_RSA_WITH_AES_256_GCM_SHA384
# - TLS_RSA_WITH_RC4_128_SHA
## Amount of time to retain events. (default 1h0m0s)
event_ttl_duration: "1h0m0s"
## Automatically renew K8S control plane certificates on first Monday of each month
auto_renew_certificates: false
# First Monday of each month
# auto_renew_certificates_systemd_calendar: "Mon *-*-1,2,3,4,5,6,7 03:{{ groups['kube_control_plane'].index(inventory_hostname) }}0:00"
...
# If the installation fails because of network problems, change to iptables, the default setting
# (IPVS = IP Virtual Server)
vi roles/kubespray-defaults/defaults/main.yaml
...
## Kube Proxy mode One of ['iptables','ipvs']
kube_proxy_mode: iptables
...
# Run the ansible playbook
ansible-playbook --flush-cache -u root -b -i inventory/mycluster/inventory.ini cluster.yml -v
# The playbook option -e ignore_assert_errors=yes ignores assertion errors and continues the installation
# ansible-playbook --flush-cache -u root -b -i inventory/mycluster/inventory.ini cluster.yml -v -e ignore_assert_errors=yes
# Once the ansible-playbook above completes successfully, the k8s installation with kubespray is done
# How to install the kubectl CLI if it is not present
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubectl
# check
kubectl get nodes -o wide
kubectl get pod --all-namespaces -o wide
Troubleshooting
- Unlike in the past, etcd must run with an odd number of members (it is no longer simply "two or more")
- The configuration file below is required
vi /etc/docker/daemon.json
...
{"insecure-registries":["172.0.0.123:5000"]}
...
- Points in the offline.yml file that are easy to get wrong
(inventory/mycluster/group_vars/all/offline.yml)
###### Write the VM where the docker registry is running, including the port
registry_host: "172.10.0.37:5000"
###### Write the local mirror address, including the port
yum_repo: "http://172.10.0.33:18080"
###### These files must already be present on each target server being installed
kubeadm_download_url: "file:///root/kubespray/resources/kubeadm"
kubectl_download_url: "file:///root/kubespray/resources/kubectl"
kubelet_download_url: "file:///root/kubespray/resources/kubelet"
## CNI Plugins
cni_download_url: "file:///root/kubespray/resources/cni-plugins-linux-{{ image_arch }}-{{ cni_version }}.tgz"
# [Optional] Calico: If using Calico network plugin
calicoctl_download_url: "file:///root/kubespray/resources/calicoctl-linux-{{ image_arch }}"
# [Optional] Calico with kdd: If using Calico network plugin with kdd datastore
calico_crds_download_url: "file:///root/kubespray/resources/calico-{{ calico_version }}.tar.gz"
# [Optional] helm: only if you set helm_enabled: true
helm_download_url: "file:///root/kubespray/resources/helm-{{ helm_version }}-linux-{{ image_arch }}.tar.gz"
##### The local mirror must respond to curl, e.g.:
##### curl -v http://172.10.0.38:18080/repo/repodata/repomd.xml
##### ansible appends /repodata/repomd.xml itself, so just set the root context correctly
# repository address (local mirror)
docker_rh_repo_base_url: "http://172.10.0.38:18080/repo"
docker_rh_repo_gpgkey: "file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7"
- After installation, delete nodelocaldns
kubectl get pod --all-namespaces
kubectl get daemonset -n kube-system
kubectl delete daemonset nodelocaldns -n kube-system
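Confirm that the nodelocaldns pods are gone:
kubectl get pod -n kube-system | grep nodelocaldns
# should return no results once the daemonset is deleted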