Install Kubernetes (K8s) Offline on CentOS 7
Now, install the Kubernetes (K8s) packages from the ~/k8s directory using the rpm command.
[root@docker-offline ~]# rpm -ivh --replacefiles --replacepkgs ~/k8s/*.rpm
warning: 53edc739a0e51a4c17794de26b13ee5df939bd3161b37f503fe2af8980b41a89-cri-tools-1.12.0-0.x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID 3e1ba8d5: NOKEY
Preparing...                          ################################# [100%]
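As a quick sanity check (our addition, not shown in the original session), we can confirm that the packages are now registered with rpm:

rpm -qa | grep -E 'kubeadm|kubelet|kubectl|kubernetes-cni|cri-tools'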
Enable bash completion for kubectl.
[root@docker-offline ~]# source <(kubectl completion bash)
[root@docker-offline ~]# kubectl completion bash > /etc/bash_completion.d/kubectl
Load the Docker image tar files into Docker.
[root@docker-offline ~]# docker load < ~/k8s/coredns.tar
fb61a074724d: Loading layer  479.7kB/479.7kB
c6a5fc8a3f01: Loading layer  40.05MB/40.05MB
Loaded image: k8s.gcr.io/coredns:1.3.1
[root@docker-offline ~]# docker load < ~/k8s/kube-proxy.tar
5ba3be777c2d: Loading layer  43.88MB/43.88MB
0b8d2e946c93: Loading layer  3.403MB/3.403MB
8b9a8fc88f0d: Loading layer  36.69MB/36.69MB
Loaded image: k8s.gcr.io/kube-proxy:v1.14.1
[root@docker-offline ~]# docker load < ~/k8s/etcd.tar
8a788232037e: Loading layer  1.37MB/1.37MB
30796113fb51: Loading layer  232MB/232MB
6fbfb277289f: Loading layer  24.98MB/24.98MB
Loaded image: k8s.gcr.io/etcd:3.3.10
[root@docker-offline ~]# docker load < ~/k8s/kube-scheduler.tar
e04ef32df86e: Loading layer  39.26MB/39.26MB
Loaded image: k8s.gcr.io/kube-scheduler:v1.14.1
[root@docker-offline ~]# docker load < ~/k8s/kube-apiserver.tar
97f70f3a7a0c: Loading layer  167.6MB/167.6MB
Loaded image: k8s.gcr.io/kube-apiserver:v1.14.1
[root@docker-offline ~]# docker load < ~/k8s/pause.tar
e17133b79956: Loading layer  744.4kB/744.4kB
Loaded image: k8s.gcr.io/pause:3.1
[root@docker-offline ~]# docker load < ~/k8s/kube-controller-manager.tar
d8ca6e1aa16e: Loading layer  115.6MB/115.6MB
Loaded image: k8s.gcr.io/kube-controller-manager:v1.14.1
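Loading each archive one at a time works, but a short loop (a minimal sketch, assuming all of the image archives live in ~/k8s) does the same with less typing:

# import every saved image archive into the local Docker store
for tarball in ~/k8s/*.tar; do
  docker load < "$tarball"
done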
We now have all the required Docker images in the local Docker image store.
Therefore, we can configure this CentOS 7 machine as either a Kubernetes (K8s) master node or a worker node.
We have not configured a master node yet.
Therefore, we must configure the master node first.
[root@docker-offline ~]# kubeadm init
I0427 20:36:47.088327   18015 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0427 20:36:47.090078   18015 version.go:97] falling back to the local client version: v1.14.1
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [docker-offline.onitroad.com localhost] and IPs [192.168.1.159 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [docker-offline.onitroad.com localhost] and IPs [192.168.1.159 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [docker-offline.onitroad.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.159]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 27.538706 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node docker-offline.onitroad.com as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node docker-offline.onitroad.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 6e4ntu.a5r1md9vuqex4pe8
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.159:6443 --token 6e4ntu.a5r1md9vuqex4pe8 \
    --discovery-token-ca-cert-hash sha256:19f4d9f6d433cc12addb70e2737c629213777deed28fa5dcc33f9d05d2382d5b
Run the commands suggested in the output above.
[root@docker-offline ~]# mkdir -p $HOME/.kube
[root@docker-offline ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@docker-offline ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
Enable and start kubelet.service.
[root@docker-offline ~]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@docker-offline ~]# systemctl start kubelet.service
Apply the flannel network add-on.
[root@docker-offline ~]# kubectl apply -f ~/k8s/kube-flannel.yml
podsecuritypolicy.extensions/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
List the nodes in the Kubernetes (K8s) cluster.
[root@docker-offline ~]# kubectl get nodes
NAME                          STATUS     ROLES    AGE    VERSION
docker-offline.onitroad.com   NotReady   master   5m9s   v1.14.1
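A NotReady status at this point is normal: the node is only marked Ready after the flannel and CoreDNS pods have started. A quick way to follow the progress (our addition, not part of the original session):

# once all kube-system pods are Running, the node turns Ready
kubectl get pods -n kube-system
kubectl get nodes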
We have successfully installed Kubernetes (K8s) offline on CentOS 7.
Installation Environment
We have provisioned two CentOS 7 servers. The configuration of the internet-connected machine is shown below:
Hostname:         docker-online.onitroad.com
Operating System: CentOS 7.6
Internet Access:  Yes
Docker Version:   Docker CE 18.09
Install Docker CE Offline on CentOS 7
We have already published a complete article on installing Docker CE on an offline CentOS 7 machine.
It is therefore recommended to follow that article and install Docker CE on both machines before installing Kubernetes (K8s).
Docker CE is also required on docker-online.onitroad.com, because we will use docker commands there to pull the required images (which are hosted on the k8s.gcr.io registry).
Connect to docker-offline.onitroad.com as the root user via ssh.
After installing Docker CE, we must configure it for use with Kubernetes (K8s).
[root@docker-offline ~]# mkdir /etc/docker
[root@docker-offline ~]# cat > /etc/docker/daemon.json << EOF
> {
>   "exec-opts": ["native.cgroupdriver=systemd"],
>   "log-driver": "json-file",
>   "log-opts": {
>     "max-size": "100m"
>   },
>   "storage-driver": "overlay2",
>   "storage-opts": [
>     "overlay2.override_kernel_check=true"
>   ]
> }
> EOF
Restart docker.service.
[root@docker-offline ~]# systemctl restart docker.service
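Since kubeadm expects the kubelet and Docker to use the same cgroup driver, it is worth confirming after the restart that the daemon.json change took effect. A quick check (our addition, assuming Docker CE 18.09):

# should print: Cgroup Driver: systemd
docker info | grep -i 'cgroup driver'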
Prepare for the Kubernetes (K8s) Configuration
Connect to docker-offline.onitroad.com as the root user via ssh.
Set the kernel parameters required by Kubernetes (K8s).
[root@docker-offline ~]# cat > /etc/sysctl.d/kubernetes.conf << EOF
> net.ipv4.ip_forward = 1
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF
Load the br_netfilter module and reload the kernel parameter configuration files.
[root@docker-offline ~]# modprobe br_netfilter
[root@docker-offline ~]# sysctl --system
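Note that modprobe loads br_netfilter only for the current boot. To load it automatically on every boot, a minimal sketch using systemd's modules-load.d mechanism (available on CentOS 7):

# load br_netfilter automatically at boot
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf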
Turn off swap, as Kubernetes (K8s) requires swap to be disabled.
[root@docker-offline ~]# swapoff -a
[root@docker-offline ~]# sed -e '/swap/s/^/#/g' -i /etc/fstab
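To verify that no swap is left active (a check we add here, not in the original session):

swapon -s    # no output means swap is fully disabled
free -h      # the Swap line should show 0B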
Allow the Kubernetes (K8s) service ports through the Linux firewall.
For the master node:
[root@docker-offline ~]# firewall-cmd --permanent --add-port={6443,2379,2380,10250,10251,10252}/tcp
success
For worker nodes:
[root@docker-offline ~]# firewall-cmd --permanent --add-port={10250,30000-32767}/tcp
success
Reload the firewall configuration.
[root@docker-offline ~]# firewall-cmd --reload
success
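To confirm that the ports are now open (our addition):

firewall-cmd --list-ports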
Switch SELinux to permissive mode using the following commands.
[root@docker-offline ~]# setenforce 0
[root@docker-offline ~]# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
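The setenforce command changes the mode for the running system, while the sed edit makes it persistent across reboots. A quick verification (our addition):

getenforce    # expected output: Permissive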
Kubernetes (K8s) relies on a container runtime platform such as Docker, containerd, etc.,
and it needs a registry from which to download and use container images.
Docker Hub is the global public registry that serves this purpose.
However, there are scenarios where we want to run Kubernetes (K8s) on a private network.
In such cases we cannot reach Docker Hub, so we must configure a private Docker registry for our Kubernetes (K8s) cluster.
In this article, we install Kubernetes (K8s) offline on CentOS 7.
The steps for configuring a Docker registry are not covered here; see our earlier tutorial on configuring a private Docker registry on CentOS 7.
Download the Packages/Images for the Offline Kubernetes (K8s) Installation
Connect to docker-online.onitroad.com as the root user via ssh.
Add the Kubernetes (K8s) yum repository as follows:
[root@docker-online k8s]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
> [kubernetes]
> name=Kubernetes
> baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
> EOF
Build the yum cache.
[root@docker-online ~]# yum makecache fast
Create a directory for downloading the required Kubernetes (K8s) packages.
[root@docker-online ~]# mkdir ~/k8s
[root@docker-online k8s]# cd ~/k8s
Download the Kubernetes (K8s) packages using yumdownloader.
[root@docker-online k8s]# yumdownloader --resolve kubelet kubeadm kubectl
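yumdownloader writes the rpm files into the current directory, which is why we changed into ~/k8s first. Alternatively, the --destdir option of yumdownloader (part of yum-utils) can name the target directory explicitly; a sketch with the same package set:

yumdownloader --resolve --destdir ~/k8s kubelet kubeadm kubectl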
List the downloaded files.
[root@docker-online k8s]# ls
53edc739a0e51a4c17794de26b13ee5df939bd3161b37f503fe2af8980b41a89-cri-tools-1.12.0-0.x86_64.rpm
548a0dcd865c16a50980420ddfa5fbccb8b59621179798e6dc905c9bf8af3b34-kubernetes-cni-0.7.5-0.x86_64.rpm
5c6cb3beb5142fa21020e2116824ba77a2d1389a3321601ea53af5ceefe70ad1-kubectl-1.14.1-0.x86_64.rpm
9e1af74c18311f2f6f8460dbd1aa3e02911105bfd455416381e995d8172a0a01-kubeadm-1.14.1-0.x86_64.rpm
conntrack-tools-1.4.4-4.el7.x86_64.rpm
e1e8f430609698d7ec87642179ab57605925cb9aa48d406da97dedfb629bebf2-kubelet-1.14.1-0.x86_64.rpm
libnetfilter_cthelper-1.0.0-9.el7.x86_64.rpm
libnetfilter_cttimeout-1.0.0-6.el7.x86_64.rpm
libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm
socat-1.7.3.2-2.el7.x86_64.rpm
Pull the Docker images that Kubernetes (K8s) needs for node initialization. Note that these images are hosted on the k8s.gcr.io registry rather than Docker Hub.
[root@docker-online ~]# docker pull k8s.gcr.io/kube-apiserver:v1.14.1
v1.14.1: Pulling from kube-apiserver
346aee5ea5bc: Pull complete
7f0e834d5a94: Pull complete
Digest: sha256:bb3e3264bf74cc6929ec05b494d95b7aed9ee1e5c1a5c8e0693b0f89e2e7288e
Status: Downloaded newer image for k8s.gcr.io/kube-apiserver:v1.14.1
[root@docker-online ~]# docker pull k8s.gcr.io/kube-controller-manager:v1.14.1
v1.14.1: Pulling from kube-controller-manager
346aee5ea5bc: Already exists
f4db69ee8ade: Pull complete
Digest: sha256:5279e0030094c0ef2ba183bd9627e91e74987477218396bd97a5e070df241df5
Status: Downloaded newer image for k8s.gcr.io/kube-controller-manager:v1.14.1
[root@docker-online ~]# docker pull k8s.gcr.io/kube-scheduler:v1.14.1
v1.14.1: Pulling from kube-scheduler
346aee5ea5bc: Already exists
b88909b8f99f: Pull complete
Digest: sha256:11af0ae34bc63cdc78b8bd3256dff1ba96bf2eee4849912047dee3e420b52f8f
Status: Downloaded newer image for k8s.gcr.io/kube-scheduler:v1.14.1
[root@docker-online ~]# docker pull k8s.gcr.io/kube-proxy:v1.14.1
v1.14.1: Pulling from kube-proxy
346aee5ea5bc: Already exists
1e695dec1fee: Pull complete
100690d61cf6: Pull complete
Digest: sha256:44af2833c6cbd9a7fc2e9d2f5244a39dfd2e31ad91bf9d4b7d810678db738ee9
Status: Downloaded newer image for k8s.gcr.io/kube-proxy:v1.14.1
[root@docker-online ~]# docker pull k8s.gcr.io/pause:3.1
3.1: Pulling from pause
67ddbfb20a22: Pull complete
Digest: sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea
Status: Downloaded newer image for k8s.gcr.io/pause:3.1
[root@docker-online ~]# docker pull k8s.gcr.io/etcd:3.3.10
3.3.10: Pulling from etcd
90e01955edcd: Pull complete
6369547c492e: Pull complete
bd2b173236d3: Pull complete
Digest: sha256:17da501f5d2a675be46040422a27b7cc21b8a43895ac998b171db1c346f361f7
Status: Downloaded newer image for k8s.gcr.io/etcd:3.3.10
[root@docker-online ~]# docker pull k8s.gcr.io/coredns:1.3.1
1.3.1: Pulling from coredns
e0daa8927b68: Pull complete
3928e47de029: Pull complete
Digest: sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4
Status: Downloaded newer image for k8s.gcr.io/coredns:1.3.1
List the Docker images.
[root@docker-online ~]# docker image ls -a
REPOSITORY                           TAG       IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy                v1.14.1   20a2d7035165   2 weeks ago     82.1MB
k8s.gcr.io/kube-apiserver            v1.14.1   cfaa4ad74c37   2 weeks ago     210MB
k8s.gcr.io/kube-controller-manager   v1.14.1   efb3887b411d   2 weeks ago     158MB
k8s.gcr.io/kube-scheduler            v1.14.1   8931473d5bdb   2 weeks ago     81.6MB
k8s.gcr.io/coredns                   1.3.1     eb516548c180   3 months ago    40.3MB
k8s.gcr.io/etcd                      3.3.10    2c4adeb21b4f   4 months ago    258MB
k8s.gcr.io/pause                     3.1       da86e6ba6ca1   16 months ago   742kB
Export the Kubernetes (K8s) related Docker images to individual tar files.
[root@docker-online ~]# docker save k8s.gcr.io/kube-apiserver:v1.14.1 > ~/k8s/kube-apiserver.tar
[root@docker-online ~]# docker save k8s.gcr.io/kube-controller-manager:v1.14.1 > ~/k8s/kube-controller-manager.tar
[root@docker-online ~]# docker save k8s.gcr.io/kube-scheduler:v1.14.1 > ~/k8s/kube-scheduler.tar
[root@docker-online ~]# docker save k8s.gcr.io/kube-proxy:v1.14.1 > ~/k8s/kube-proxy.tar
[root@docker-online ~]# docker save k8s.gcr.io/pause:3.1 > ~/k8s/pause.tar
[root@docker-online ~]# docker save k8s.gcr.io/etcd:3.3.10 > ~/k8s/etcd.tar
[root@docker-online ~]# docker save k8s.gcr.io/coredns:1.3.1 > ~/k8s/coredns.tar
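The seven docker save commands above can also be generated with a short loop (a sketch, assuming every k8s.gcr.io image in the local store should be exported, named after the image without its registry prefix and tag):

for image in $(docker image ls --format '{{.Repository}}:{{.Tag}}' | grep '^k8s.gcr.io/'); do
  name=$(basename "${image%%:*}")            # e.g. kube-proxy
  docker save "$image" > ~/k8s/"$name".tar
done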
List the tar files.
[root@docker-online ~]# ls ~/k8s/*.tar
/root/k8s/coredns.tar                  /root/k8s/kube-proxy.tar
/root/k8s/etcd.tar                     /root/k8s/kube-scheduler.tar
/root/k8s/kube-apiserver.tar           /root/k8s/pause.tar
/root/k8s/kube-controller-manager.tar
Download the flannel network manifest.
[root@docker-online ~]# cd ~/k8s
[root@docker-online k8s]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
We have now downloaded all the files required for the Kubernetes (K8s) installation.
Transfer the ~/k8s directory from docker-online to docker-offline.
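The transfer method is left open; any offline-friendly channel (a USB drive, scp over the LAN, etc.) will do. A minimal sketch using scp, assuming the two machines can reach each other over ssh:

# copy the packages, image archives, and flannel manifest to the offline machine
scp -r ~/k8s root@docker-offline.onitroad.com:~/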