Configure the Kubelet Service on the Master Node
Use the kubeadm command to pull the images required to configure the kubelet service.
[root@kubemaster-01 ~]# kubeadm config images pull
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.14.1
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.14.1
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.14.1
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.14.1
[config/images] Pulled k8s.gcr.io/pause:3.1
[config/images] Pulled k8s.gcr.io/etcd:3.3.10
[config/images] Pulled k8s.gcr.io/coredns:1.3.1
Initialize and configure the kubelet service as follows:
[root@kubemaster-01 ~]# kubeadm init
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
	[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubemaster-01.onitroad.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.160]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubemaster-01.onitroad.com localhost] and IPs [192.168.1.160 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubemaster-01.onitroad.com localhost] and IPs [192.168.1.160 127.0.0.1 ::1]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 42.152638 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node kubemaster-01.onitroad.com as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kubemaster-01.onitroad.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: mm20xq.goxx7plwzrx75tv3
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.160:6443 --token mm20xq.goxx7plwzrx75tv3 \
    --discovery-token-ca-cert-hash sha256:00065886b183ea9cc2e9fbb68ff2a82b52574c2ab5ad8868c4fd6c2feb006d6f
Run the following commands, as suggested by the output above.
[root@kubemaster-01 ~]# mkdir -p $HOME/.kube
[root@kubemaster-01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@kubemaster-01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
Start and enable the kubelet service.
[root@kubemaster-01 ~]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@kubemaster-01 ~]# systemctl start kubelet.service
Environment Setup
We have two CentOS 7 servers with the following configuration.
Hostname: | kubemaster-01 |
IP address: | 192.168.1.160/24 |
CPU: | 3.4 GHz (2 cores)* |
Memory: | 2 GB |
Storage: | 40 GB |
Operating system: | CentOS 7.6 |
Cluster role: | K8s Master |
Docker version: | 18.09.5 |
Kubernetes version: | 1.14.1 |
- Each node must have at least 2 CPU cores to install Kubernetes.
Make sure the hostnames are resolvable on all nodes.
For this, we can use a DNS server or the local DNS resolver.
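If no DNS server is available, local resolution via /etc/hosts on every node is enough. A minimal sketch of the entries, assuming the hostnames from this article; the worker's address (192.168.1.161) is not given in the text and is only an illustrative placeholder:

```
# /etc/hosts entries on every node (kubenode-01 address is assumed; use the real one)
192.168.1.160   kubemaster-01.onitroad.com   kubemaster-01
192.168.1.161   kubenode-01.onitroad.com     kubenode-01
```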
Install Docker CE on CentOS 7
We will configure Docker CE as the container runtime for Kubernetes (integrated through the CRI, Container Runtime Interface).
Other container runtime options for Kubernetes are containerd, CRI-O, and Frakti.
Log in to the Kubernetes master kubemaster-01.onitroad.com as root using ssh.
Install the prerequisite packages for Docker CE using the yum command.
[root@kubemaster-01 ~]# yum install -y device-mapper-persistent-data lvm2 yum-utils
Add the Docker yum repository as follows:
[root@kubemaster-01 ~]# yum-config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo
Build the yum cache for the Docker repository.
[root@kubemaster-01 ~]# yum makecache fast
Install Docker CE using the yum command.
[root@kubemaster-01 ~]# yum install -y docker-ce
Configure the Docker service for use with Kubernetes.
[root@kubemaster-01 ~]# mkdir /etc/docker
[root@kubemaster-01 ~]# cat > /etc/docker/daemon.json << EOF
> {
>   "exec-opts": ["native.cgroupdriver=systemd"],
>   "log-driver": "json-file",
>   "log-opts": {
>     "max-size": "100m"
>   },
>   "storage-driver": "overlay2",
>   "storage-opts": [
>     "overlay2.override_kernel_check=true"
>   ]
> }
> EOF
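A syntax error in daemon.json will stop the Docker daemon from starting, so it is worth validating the file before restarting Docker. A small sketch (our own addition, not part of the original procedure) that writes the same content to a temporary file and checks it with Python's json.tool:

```shell
# Write the daemon.json content to a temporary file; on a real host the
# target would be /etc/docker/daemon.json.
tmp=$(mktemp)
cat > "$tmp" << 'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
# json.tool exits non-zero on malformed JSON, so this only prints on success.
python3 -m json.tool "$tmp" > /dev/null && echo "daemon.json: valid JSON"
```

The "native.cgroupdriver=systemd" option matters here: kubelet and Docker must agree on the cgroup driver, and systemd is the recommended choice on systemd-based distributions such as CentOS 7.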
Start and enable the Docker service.
[root@kubemaster-01 ~]# systemctl enable docker.service
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@kubemaster-01 ~]# systemctl start docker.service
Docker CE is now installed.
Repeat the steps above to install Docker CE on kubenode-01.onitroad.com.
Kubernetes (k8s) is an open-source container orchestration system for automating application deployment, management, and scaling across clusters of hosts.
Kubernetes was originally developed by Google and is now maintained by the Cloud Native Computing Foundation.
Kubernetes requires a container runtime, integrated through the Container Runtime Interface (CRI), for orchestration.
Kubernetes supports different container runtimes, including Docker, containerd, and CRI-O.
In this article, we will install a two-node Kubernetes/k8s cluster with Docker CE on CentOS 7.
Install Kubernetes on CentOS 7
Set the following kernel parameters as required by Kubernetes.
[root@kubemaster-01 ~]# cat > /etc/sysctl.d/kubernetes.conf << EOF
> net.ipv4.ip_forward = 1
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF
Load the br_netfilter module and reload the kernel parameter configuration files.
[root@kubemaster-01 ~]# modprobe br_netfilter
[root@kubemaster-01 ~]# sysctl --system
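Note that a module loaded with modprobe does not survive a reboot. To load br_netfilter persistently, it can be listed in a file under /etc/modules-load.d; the filename below is our own choice, a sketch rather than part of the original procedure:

```
# /etc/modules-load.d/kubernetes.conf
br_netfilter
```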
Disable swap, as required for the Kubernetes installation.
[root@kubemaster-01 ~]# swapoff -a
[root@kubemaster-01 ~]# sed -e '/swap/s/^/#/g' -i /etc/fstab
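swapoff disables swap immediately, while the sed command keeps it off after a reboot by commenting out every fstab line that contains "swap". A sketch of what the sed expression does, run on a throwaway copy with sample entries rather than the real /etc/fstab:

```shell
# Build a sample fstab in a temporary file (the device names are illustrative).
fstab_copy=$(mktemp)
printf '%s\n' \
  '/dev/mapper/centos-root /    xfs  defaults 0 0' \
  '/dev/mapper/centos-swap swap swap defaults 0 0' > "$fstab_copy"

# Same expression as in the article: prefix lines matching /swap/ with '#'.
sed -e '/swap/s/^/#/g' -i "$fstab_copy"

cat "$fstab_copy"   # the swap entry is now commented out, the root entry untouched
```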
Kubernetes uses the following service ports on the master node.
Port | Protocol | Purpose |
---|---|---|
6443 | TCP | Kubernetes API server |
2379-2380 | TCP | etcd server client API |
10250 | TCP | Kubelet API |
10251 | TCP | kube-scheduler |
10252 | TCP | kube-controller-manager |
Allow the Kubernetes service ports on kubemaster-01.onitroad.com in the Linux firewall.
[root@kubemaster-01 ~]# firewall-cmd --permanent --add-port={6443,2379,2380,10250,10251,10252}/tcp
success
[root@kubemaster-01 ~]# firewall-cmd --reload
success
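The `{...}` in the firewall-cmd invocation is shell brace expansion: the shell turns the single argument into one `--add-port` option per port before firewall-cmd ever sees it. A small sketch showing what bash actually expands it to:

```shell
# Print each argument the brace expansion produces; firewall-cmd would
# receive these same six --add-port options.
for opt in --add-port={6443,2379,2380,10250,10251,10252}/tcp; do
  echo "$opt"
done
```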
Kubernetes uses the following service ports on the worker nodes.
Port | Protocol | Purpose |
---|---|---|
10250 | TCP | Kubelet API |
30000-32767 | TCP | NodePort services |
Allow the Kubernetes service ports on kubenode-01.onitroad.com in the Linux firewall.
[root@kubenode-01 ~]# firewall-cmd --permanent --add-port={10250,30000-32767}/tcp
success
[root@kubenode-01 ~]# firewall-cmd --reload
success
Switch SELinux to permissive mode using the following commands.
[root@kubemaster-01 ~]# setenforce 0
[root@kubemaster-01 ~]# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
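setenforce 0 only changes the mode until the next reboot; the sed command makes the change persistent by rewriting the SELINUX= line in /etc/selinux/config. A sketch of the substitution, demonstrated on a temporary copy instead of the real config file:

```shell
# Sample config line in a throwaway file.
selinux_cfg=$(mktemp)
echo 'SELINUX=enforcing' > "$selinux_cfg"

# Same substitution as in the article.
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' "$selinux_cfg"

cat "$selinux_cfg"   # the line now reads SELINUX=permissive
```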
Add the Kubernetes yum repository as follows.
[root@kubemaster-01 ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
> [kubernetes]
> name=Kubernetes
> baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
> EOF
Build the yum cache for the Kubernetes repository.
[root@kubemaster-01 ~]# yum makecache fast
Install the Kubernetes packages using the yum command.
[root@kubemaster-01 ~]# yum install -y kubelet kubeadm kubectl
To enable auto-completion for the kubectl command, we have to source the completion script that kubectl itself provides.
Make sure the bash-completion package is installed first.
[root@kubemaster-01 ~]# source <(kubectl completion bash)
To make this persistent, add the script to the bash completion directory.
[root@kubemaster-01 ~]# kubectl completion bash > /etc/bash_completion.d/kubectl
Kubernetes is now installed.
Repeat the steps above to install Kubernetes on kubenode-01.onitroad.com.
Add a Node to the Kubernetes Cluster on CentOS 7
Check the status of the nodes in the Kubernetes cluster.
[root@kubemaster-01 ~]# kubectl get nodes
NAME                         STATUS     ROLES    AGE   VERSION
kubemaster-01.onitroad.com   NotReady   master   50m   v1.14.1
Add the other node to the Kubernetes cluster by executing the kubeadm join command provided in the kubeadm init output.
[root@kubenode-01 ~]# kubeadm join 192.168.1.160:6443 --token mm20xq.goxx7plwzrx75tv3 \
> --discovery-token-ca-cert-hash sha256:00065886b183ea9cc2e9fbb68ff2a82b52574c2ab5ad8868c4fd6c2feb006d6f
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
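The --discovery-token-ca-cert-hash value lets the joining node verify it is talking to the right cluster: it is the SHA-256 digest of the cluster CA's DER-encoded public key. On the master, the input would be /etc/kubernetes/pki/ca.crt; the sketch below generates a throwaway self-signed certificate purely to demonstrate the openssl pipeline.

```shell
# Demo CA cert (stand-in for /etc/kubernetes/pki/ca.crt on a real master).
key=$(mktemp)
crt=$(mktemp)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$key" -out "$crt" \
  -days 1 -subj "/CN=demo-ca" 2>/dev/null

# Extract the public key, convert to DER, and take its SHA-256 digest.
hash=$(openssl x509 -pubkey -noout -in "$crt" \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 | awk '{print $NF}')

echo "sha256:${hash}"
```

This is handy when the original join command has been lost: recompute the hash this way and create a fresh token with kubeadm token create on the master.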
If the nodes report network errors (or stay NotReady), a pod network add-on such as Flannel must be deployed to the cluster.
[root@kubemaster-01 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.extensions/psp.flannel.unprivileged configured
clusterrole.rbac.authorization.k8s.io/flannel unchanged
clusterrolebinding.rbac.authorization.k8s.io/flannel unchanged
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg unchanged
daemonset.extensions/kube-flannel-ds-amd64 unchanged
daemonset.extensions/kube-flannel-ds-arm64 unchanged
daemonset.extensions/kube-flannel-ds-arm unchanged
daemonset.extensions/kube-flannel-ds-ppc64le unchanged
daemonset.extensions/kube-flannel-ds-s390x unchanged
Check the status of the nodes in the Kubernetes cluster again.
[root@kubemaster-01 ~]# kubectl get nodes
NAME                         STATUS   ROLES    AGE   VERSION
kubemaster-01.onitroad.com   Ready    master   45m   v1.14.1
kubenode-01.onitroad.com     Ready    <none>   43m   v1.14.1
We have successfully installed a two-node Kubernetes cluster with Docker CE on CentOS 7.