Manual Kubernetes Cluster Deployment (Part 2)

Preface

For this walkthrough I use three machines: one master and two nodes. Let's start small; once we are comfortable building a cluster step by step, we can move on to a highly available Kubernetes architecture. Docker must be installed on both the master and the nodes.

Architecture Diagram

Experimental environment:

CentOS 7.2 minimal, 64-bit

Server IPs:

Master:	192.168.1.89
Node:	192.168.1.90
Node1:	192.168.1.91

Hostnames:

Master:	master
Node:	node
Node1:	node1

Software versions:

k8s components: v1.11.3
docker: ce-18.03.1
etcd: v3.3.9
flannel: 0.7.1-4

Components to install on the master node:

1. docker
2. etcd
3. kube-apiserver
4. kubectl
5. kube-scheduler
6. kube-controller-manager
7. kubelet
8. kube-proxy

Components to install on the node(s):

1. Calico/Flannel/Open vSwitch/Romana/Weave, etc. (a network solution; Kubernetes supports several)
   Official documentation: https://kubernetes.io/docs/setup/scratch/#network
2. docker
3. kubelet
4. kube-proxy

Some guides say you must disable the swap partition or errors will occur. My hosts have no swap partition configured, so I cannot say whether leaving swap enabled actually causes problems; I therefore do not include the swap-disabling steps in the main procedure. A sketch of how to disable it follows, in case you need it.
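If your hosts do have swap configured and you want to turn it off, a minimal sketch (standard CentOS 7 commands, not something I ran in this setup) would be:

# Turn off swap for the running system
swapoff -a
# Comment out swap entries in /etc/fstab so it stays off after a reboot
sed -i '/\sswap\s/s/^/#/' /etc/fstab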

A. Master Deployment

I. Environment Preparation

Environment preparation must be done on both the master and the nodes.

1. Set the hostnames

On the master:
hostnamectl set-hostname master
bash

On node:
hostnamectl set-hostname node
bash

On node1:
hostnamectl set-hostname node1
bash

2. Set up /etc/hosts

Same operation on both the master and the nodes:
echo "192.168.1.89 master" >> /etc/hosts
echo "192.168.1.90 node" >> /etc/hosts
echo "192.168.1.91 node1" >> /etc/hosts

3. Set up time synchronization

Same operation on both the master and the nodes:
yum -y install ntp
systemctl enable ntpd
systemctl start ntpd
ntpdate -u cn.pool.ntp.org
hwclock
timedatectl set-timezone Asia/Shanghai

II. Generating Certificates

(1) Security Models

Kubernetes offers two main security models to choose from. Official documentation:

https://kubernetes.io/docs/setup/scratch/#security-models

1. Access the apiserver over HTTP
   1) Rely on a firewall for security
   2) Easier to set up
2. Access the apiserver over HTTPS
   1) Use HTTPS with certificates and user credentials
   2) This is the recommended approach
   3) Configuring the certificates can be tricky

To keep a production cluster trusted and its data secure, the nodes communicate with one another over TLS. If you use the HTTPS approach, you need to prepare certificates and user credentials.

(2) Certificates

There are several ways to generate the certificates:

1. easyrsa
2. openssl
3. cfssl

Official documentation:

https://kubernetes.io/docs/concepts/cluster-administration/certificates/

The components that use certificates are:

1. etcd                      # uses ca.pem, etcd-key.pem, etcd.pem
2. kube-apiserver            # uses ca.pem, apiserver-key.pem, apiserver.pem
3. kubelet                   # uses ca.pem
4. kube-proxy                # uses ca.pem, proxy-key.pem, proxy.pem
5. kubectl                   # uses ca.pem, admin-key.pem, admin.pem
6. kube-controller-manager   # uses ca.pem, ca-key.pem

(3) Install the Certificate Tool cfssl

The official page above describes several tools; you can choose a different one, but here I use cfssl.

Note:

All certificates are generated on the master, and they only need to be created once. When you add a new node to the cluster later, just copy the certificates under /etc/kubernetes/ to the new node.

Download the command-line tools, make them executable, and put them on the PATH:
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o cfssl-certinfo
chmod +x cfssl
chmod +x cfssljson
chmod +x cfssl-certinfo
mv cfssl* /usr/bin/

(4) Create the CA Root Certificate and Key

1. Create a directory to hold the certificate files
mkdir -p /root/ssl
2. Generate the default cfssl configuration files
cd /root/ssl
cfssl print-defaults config > config.json
cfssl print-defaults csr > csr.json
[root@master ssl]# ll
total 8
-rw-r--r--. 1 root root 567 Sep 17 09:58 config.json
-rw-r--r--. 1 root root 287 Sep 17 09:58 csr.json
3. Following the format of the config.json generated above, create the ca-config.json file used when signing certificates

(I set the expiry to 87600h, i.e. 10 years.)
vim ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}

Parameter notes:

1) ca-config.json: multiple profiles can be defined, each specifying its own expiry, usage scenario, and other parameters; a particular profile is selected later when signing a certificate
2) signing: the certificate can be used to sign other certificates
3) server auth: a client can use this CA to verify the certificate presented by a server
4) client auth: a server can use this CA to verify the certificate presented by a client

4. Create the CA certificate signing request file ca-csr.json
vim ca-csr.json
{
  "CN": "ca",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [{
    "C": "CN",
    "ST": "GuangXi",
    "L": "NanNing",
    "O": "K8S",
    "OU": "K8S-Security"
  }]
}
5. Generate the CA key (ca-key.pem) and certificate (ca.pem)
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
[root@master ssl]# ll ca*
-rw-r--r--. 1 root root 285 Sep 17 10:41 ca-config.json
-rw-r--r--. 1 root root 1009 Sep 17 10:41 ca.csr
-rw-r--r--. 1 root root 194 Sep 17 10:41 ca-csr.json
-rw-------. 1 root root 1679 Sep 17 10:41 ca-key.pem
-rw-r--r--. 1 root root 1375 Sep 17 10:41 ca.pem

(5) Create the kube-apiserver Certificate and Key

1. Create the CSR file for the kube-apiserver certificate signing request
cd /root/ssl/
vim apiserver-csr.json
{
  "CN": "apiserver",
  "hosts": [
    "127.0.0.1",
    "localhost",
    "192.168.1.89",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [{
    "C": "CN",
    "ST": "GuangXi",
    "L": "NanNing",
    "O": "K8S",
    "OU": "K8S-Security"
  }]
}

In the hosts field above:

the 127.0.0.1/localhost entries cover access from the master itself
192.168.1.89 is the master's (cluster) IP
if you have multiple masters, list every master's IP here

2. Generate the kube-apiserver certificate and key
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes apiserver-csr.json | cfssljson -bare apiserver
[root@master ssl]# ll apiserver*
-rw-r--r--. 1 root root 1257 Sep 17 11:32 apiserver.csr
-rw-r--r--. 1 root root 422 Sep 17 11:28 apiserver-csr.json
-rw-------. 1 root root 1675 Sep 17 11:32 apiserver-key.pem
-rw-r--r--. 1 root root 1635 Sep 17 11:32 apiserver.pem

(6) Create the etcd Certificate and Key

1. Create the CSR file for the etcd certificate signing request
cd /root/ssl/
vim etcd-csr.json
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "localhost",
    "192.168.1.89",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [{
    "C": "CN",
    "ST": "GuangXi",
    "L": "NanNing",
    "O": "K8S",
    "OU": "K8S-Security"
  }]
}
2. Generate the etcd certificate and key
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
[root@master ssl]# ll etcd*
-rw-r--r--. 1 root root 1257 Sep 17 10:55 etcd.csr
-rw-r--r--. 1 root root 421 Sep 17 10:55 etcd-csr.json
-rw-------. 1 root root 1679 Sep 17 10:55 etcd-key.pem
-rw-r--r--. 1 root root 1635 Sep 17 10:55 etcd.pem

(7) Create the kubectl Certificate and Key

1. Create the CSR file for the kubectl certificate signing request
cd /root/ssl/
vim admin-csr.json
{
  "CN": "kubectl",
  "hosts": [
    "127.0.0.1",
    "localhost",
    "192.168.1.89",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [{
    "C": "CN",
    "ST": "GuangXi",
    "L": "NanNing",
    "O": "system:masters",
    "OU": "K8S-Security"
  }]
}
2. Generate the kubectl certificate and key
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
[root@master ssl]# ll admin*
-rw-r--r--. 1 root root 1017 Sep 17 11:41 admin.csr
-rw-r--r--. 1 root root 217 Sep 17 11:40 admin-csr.json
-rw-------. 1 root root 1675 Sep 17 11:41 admin-key.pem
-rw-r--r--. 1 root root 1419 Sep 17 11:41 admin.pem

(8) Create the kube-proxy Certificate and Key

1. Create the CSR file for the kube-proxy certificate signing request
cd /root/ssl/
vim proxy-csr.json
{
  "CN": "proxy",
  "hosts": [
    "127.0.0.1",
    "localhost",
    "192.168.1.89",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [{
    "C": "CN",
    "ST": "GuangXi",
    "L": "NanNing",
    "O": "K8S",
    "OU": "K8S-Security"
  }]
}
2. Generate the kube-proxy certificate and key
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes proxy-csr.json | cfssljson -bare proxy
[root@master ssl]# ll proxy*
-rw-r--r--. 1 root root 1005 Sep 17 11:46 proxy.csr
-rw-r--r--. 1 root root 206 Sep 17 11:45 proxy-csr.json
-rw-------. 1 root root 1679 Sep 17 11:46 proxy-key.pem
-rw-r--r--. 1 root root 1403 Sep 17 11:46 proxy.pem

(9) Collect the Certificates and Keys for Later Use

Copy the generated certificates and keys (the .pem files) to /etc/kubernetes/ssl/ for later use:
mkdir -p /etc/kubernetes/ssl
cp /root/ssl/*.pem /etc/kubernetes/ssl/
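You can quickly confirm the files landed where the later steps expect them; the listing should show the ca, apiserver, etcd, admin, and proxy .pem pairs:

ls -l /etc/kubernetes/ssl/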

III. Install the kubectl Command-Line Tool

Official installation documentation:

https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-binary-using-curl

You can follow the official method; here I simply download the binary release, which comes to the same thing:

1. Download the kubectl binary package

Download page:

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md#v1113

Download from the command line:
wget https://dl.k8s.io/v1.11.3/kubernetes-client-linux-amd64.tar.gz
2. Unpack and install
tar -zxvf kubernetes-client-linux-amd64.tar.gz
cp kubernetes/client/bin/kubectl /usr/bin/
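To confirm the binary is on the PATH and is the release we expect, check the client version (the exact output format may vary slightly between builds):

kubectl version --client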

IV. Create the kubeconfig File for kubectl

For kubectl to find and access a Kubernetes cluster it needs a kubeconfig file, which organizes information about clusters, users, namespaces, and authentication mechanisms. kubectl reads it to determine which cluster to use and how to talk to that cluster's API server.

Official documentation:

https://k8smeetup.github.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/

The official article creates a separate directory and writes out the cluster, user, and context entries by hand. Below I generate everything with kubectl config commands instead; the result is saved to ~/.kube/config by default. Both approaches give the same result, so use whichever you prefer.

1. Set the cluster parameters

(--server is followed by the kube-apiserver address and port, even though we have not deployed the apiserver yet.)
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/ssl/ca.pem --embed-certs=true --server=https://192.168.1.89:6443
2. Set the client authentication parameters
kubectl config set-credentials admin --client-certificate=/etc/kubernetes/ssl/admin.pem --embed-certs=true --client-key=/etc/kubernetes/ssl/admin-key.pem
3. Set the context parameters
kubectl config set-context kubernetes --cluster=kubernetes --user=admin
4. Set the default context
kubectl config use-context kubernetes
5. Check the result
[root@master ~]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://192.168.1.89:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: admin
  name: kubernetes
current-context: kubernetes
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

V. Create the TLS Bootstrapping Token

Official documentation:

https://kubernetes.io/cn/docs/admin/kubelet-tls-bootstrapping/

1. The token can be any string containing 128 bits of entropy; it can be generated with a secure random number generator:
cd /root/
head -c 16 /dev/urandom | od -An -t x | tr -d ' '


2. Write the generated token into the following file
vim token.csv
76b154e3c04b6a0f926579338b88d177,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
3. Copy token.csv to /etc/kubernetes/ for later use
cp token.csv /etc/kubernetes/

If you regenerate the token later, you need to:

1. Update token.csv and distribute it to /etc/kubernetes/ on all machines (master and nodes); distributing it to the nodes is not strictly required
2. Regenerate bootstrap.kubeconfig and distribute it to /etc/kubernetes/ on all node machines
3. Restart the kube-apiserver and kubelet processes
4. Re-approve the kubelet CSR requests

A rough sketch of the distribution and restart commands follows.
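A minimal sketch of steps 1-3, assuming root SSH access from the master and the node IPs used in this guide:

# On the master: push the updated token.csv and bootstrap.kubeconfig to each node
for node in 192.168.1.90 192.168.1.91; do
    scp /etc/kubernetes/token.csv /etc/kubernetes/bootstrap.kubeconfig root@${node}:/etc/kubernetes/
done
# Restart kube-apiserver on the master
systemctl restart kube-apiserver
# Restart kubelet on each node (run this on the nodes)
systemctl restart kubelet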

VI. Create the kubelet Bootstrapping kubeconfig File
cd /etc/kubernetes/
1. Set the cluster parameters
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/ssl/ca.pem --embed-certs=true --server=https://192.168.1.89:6443 --kubeconfig=bootstrap.kubeconfig
2. Set the client authentication parameters (--token is the string generated above)
kubectl config set-credentials kubelet-bootstrap --token=76b154e3c04b6a0f926579338b88d177 --kubeconfig=bootstrap.kubeconfig
3. Set the context parameters
kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=bootstrap.kubeconfig
4. Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

Note:

1. With --embed-certs set to true, the certificate-authority certificate is embedded into the generated bootstrap.kubeconfig file.
2. No client certificate or key is specified when setting the credentials; they are generated later by kube-apiserver through the bootstrapping flow.

VII. Create the kube-proxy kubeconfig File
cd /etc/kubernetes/
1. Set the cluster parameters
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/ssl/ca.pem  --embed-certs=true --server=https://192.168.1.89:6443 --kubeconfig=kube-proxy.kubeconfig
2. Set the client authentication parameters
kubectl config set-credentials kube-proxy --client-certificate=/etc/kubernetes/ssl/proxy.pem --client-key=/etc/kubernetes/ssl/proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
3. Set the context parameters
kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
4. Set the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Note:

Both the cluster parameters and the client credentials are set with --embed-certs=true, which embeds the contents of the certificate-authority, client-certificate, and client-key files into the generated kube-proxy.kubeconfig file.

VIII. Install etcd

You can install etcd with yum or from the binary package; here I use the binary package. With yum it is simply yum -y install etcd, and building from source is also possible, see:

https://blog.csdn.net/varyall/article/details/79128181

Official etcd download page:

https://github.com/etcd-io/etcd/releases

1. Upload the downloaded binary package to the server
2. Unpack it
tar -zxvf etcd-v3.3.9-linux-amd64.tar.gz
3. Move it to /usr/local/
mv etcd-v3.3.9-linux-amd64 /usr/local/etcd
4. Create symlinks
ln -s /usr/local/etcd/etcd* /usr/local/bin/
5. Make the binaries executable
chmod +x /usr/local/bin/etcd*
6. Check the version
[root@master ~]# etcd --version
etcd Version: 3.3.9
Git SHA: fca8add78
Go Version: go1.10.3
Go OS/Arch: linux/amd64
7. Register etcd as a systemd service, enabled at boot
vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=/etc/etcd/etcd.conf
ExecStart=/usr/local/bin/etcd \
--name ${ETCD_NAME} \
--initial-advertise-peer-urls ${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--listen-peer-urls ${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls ${ETCD_LISTEN_CLIENT_URLS} \
--advertise-client-urls ${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-cluster-token ${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster ${ETCD_INITIAL_CLUSTER} \
--initial-cluster-state new \
--data-dir=${ETCD_DATA_DIR}
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
8. Create the etcd configuration file
mkdir -p /etc/etcd
vim /etc/etcd/etcd.conf
# [member]
ETCD_NAME=etcd
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_WAL_DIR="/var/lib/etcd/wal"
ETCD_SNAPSHOT_COUNT="100"
ETCD_HEARTBEAT_INTERVAL="100"
ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="https://192.168.1.89:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.89:2379,https://127.0.0.1:2379"
ETCD_MAX_SNAPSHOTS="5"
ETCD_MAX_WALS="5"
#ETCD_CORS=""

# [cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.89:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd=https://192.168.1.89:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.89:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_STRICT_RECONFIG_CHECK="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"

# [proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"

# [security]
ETCD_CERT_FILE="/etc/kubernetes/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/kubernetes/ssl/etcd-key.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/kubernetes/ssl/ca.pem"
ETCD_AUTO_TLS="true"
ETCD_PEER_CERT_FILE="/etc/kubernetes/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/kubernetes/ssl/etcd-key.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/kubernetes/ssl/ca.pem"
ETCD_PEER_AUTO_TLS="true"
9. Start etcd
systemctl daemon-reload
systemctl start etcd
systemctl enable etcd


10. Open ports 2379 and 2380 in the firewall
firewall-cmd --add-port=2379/tcp --permanent
firewall-cmd --add-port=2380/tcp --permanent
systemctl restart firewalld
11. Check the etcd ports
[root@master ~]# netstat -luntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 12869/etcd
tcp 0 0 192.168.1.89:2379 0.0.0.0:* LISTEN 12869/etcd
tcp 0 0 127.0.0.1:2380 0.0.0.0:* LISTEN 12869/etcd
tcp 0 0 192.168.1.89:2380 0.0.0.0:* LISTEN 12869/etcd
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1299/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 2456/master
tcp6 0 0 :::22 :::* LISTEN 1299/sshd
tcp6 0 0 ::1:25 :::* LISTEN 2456/master
udp 0 0 192.168.1.89:123 0.0.0.0:* 2668/ntpd
udp 0 0 127.0.0.1:123 0.0.0.0:* 2668/ntpd
udp 0 0 0.0.0.0:123 0.0.0.0:* 2668/ntpd
udp6 0 0 fe80::20c:29ff:fec7:123 :::* 2668/ntpd
udp6 0 0 ::1:123 :::* 2668/ntpd
udp6 0 0 :::123 :::* 2668/ntpd
12. Check that etcd is usable
[root@master ~]# export ETCDCTL_API=3
[root@master ~]# etcdctl --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem --endpoints=https://192.168.1.89:2379 endpoint health
https://192.168.1.89:2379 is healthy: successfully committed proposal: took = 650.006µs


IX. Install Docker

This is probably the simplest installation step in the whole article. I have written about installing Docker before, but rather than making you dig up the old post, here it is again. I install with yum.

1. Install the dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2
2. Add the Docker yum repository
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
3. List the available docker-ce versions
yum list docker-ce --showduplicates | sort -r
* updates: mirrors.163.com
Loading mirror speeds from cached hostfile
Loaded plugins: fastestmirror
* extras: mirrors.163.com
* epel: mirror01.idc.hinet.net
docker-ce.x86_64 18.06.1.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.0.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.03.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 18.03.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.12.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.12.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.09.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.09.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.2.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.3.ce-1.el7 docker-ce-stable
docker-ce.x86_64 17.03.2.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.0.ce-1.el7.centos docker-ce-stable
* base: mirrors.163.com
Available Packages
4. Install the version you want
yum -y install docker-ce-18.03.1.ce-1.el7.centos
5. Add the root user to the docker group
gpasswd -a root docker
6. Start Docker
systemctl start docker
7. Enable it at boot
systemctl enable docker
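A quick sanity check that the daemon is active and is the version we just installed:

docker version
systemctl is-active docker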

X. Install Kubernetes

1. Download page
https://github.com/kubernetes/kubernetes/releases/tag/v1.11.3
2. Open CHANGELOG-1.11.md

3. Download

kubernetes.tar.gz contains the Kubernetes service binaries, documentation, and examples, while kubernetes-src.tar.gz contains the full source code. Here I download the Server Binaries package, which includes all the service binaries Kubernetes needs to run.

You can also follow the official instructions:

https://kubernetes.io/docs/setup/scratch/#downloading-and-extracting-kubernetes-binaries

Download from the command line:
wget https://dl.k8s.io/v1.11.3/kubernetes-server-linux-amd64.tar.gz
4. Install the Kubernetes binaries
tar -zxvf kubernetes-server-linux-amd64.tar.gz
mv kubernetes /usr/local/
ln -s /usr/local/kubernetes/server/bin/hyperkube /usr/bin/
ln -s /usr/local/kubernetes/server/bin/kube-apiserver /usr/bin/
ln -s /usr/local/kubernetes/server/bin/kube-scheduler /usr/bin/
ln -s /usr/local/kubernetes/server/bin/kubelet /usr/bin/
ln -s /usr/local/kubernetes/server/bin/kube-controller-manager /usr/bin/
ln -s /usr/local/kubernetes/server/bin/kube-proxy /usr/bin/
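To make sure the symlinks resolve to the expected release, you can query the binaries' versions:

kube-apiserver --version
kube-controller-manager --version
kube-scheduler --version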

XI. Register kube-apiserver as a systemd Service, Enabled at Boot

1. Create the service unit file
vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Service
After=network.target
After=etcd.service

[Service]
EnvironmentFile=/etc/kubernetes/config
EnvironmentFile=/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_ETCD_SERVERS \
$KUBE_API_ADDRESS \
$KUBE_API_PORT \
$KUBELET_PORT \
$KUBE_ALLOW_PRIV \
$KUBE_SERVICE_ADDRESSES \
$KUBE_ADMISSION_CONTROL \
$KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
2. Create the shared configuration file
vim /etc/kubernetes/config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"

# How the controller-manager, scheduler, and proxy find the apiserver
#KUBE_MASTER="--master=http://sz-pg-oam-docker-test-001.tendcloud.com:8080"
KUBE_MASTER="--master=http://192.168.1.89:8080"

Replace the IP address with your own. This configuration file is shared by kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, and kube-proxy.

3. Create the kube-apiserver configuration file
vim /etc/kubernetes/apiserver
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--advertise-address=0.0.0.0 --insecure-bind-address=0.0.0.0 --bind-address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--insecure-port=8080 --secure-port=6443"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=https://192.168.1.89:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.0.0.0/24"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction"

# Add your own!
KUBE_API_ARGS="--authorization-mode=RBAC,Node \
--endpoint-reconciler-type=lease \
--runtime-config=batch/v2alpha1=true \
--anonymous-auth=false \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/etc/kubernetes/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/etc/kubernetes/ssl/apiserver.pem \
--tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
--etcd-quorum-read=true \
--storage-backend=etcd3 \
--etcd-cafile=/etc/kubernetes/ssl/ca.pem \
--etcd-certfile=/etc/kubernetes/ssl/etcd.pem \
--etcd-keyfile=/etc/kubernetes/ssl/etcd-key.pem \
--enable-swagger-ui=true \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/kube-audit/audit.log \
--event-ttl=1h "

Change the IP address and the certificate and key paths to match your own setup.

4. Start kube-apiserver
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver


5. Open ports 8080 and 6443 in the firewall
firewall-cmd --add-port=8080/tcp --permanent
firewall-cmd --add-port=6443/tcp --permanent
systemctl restart firewalld
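With the insecure port (8080) enabled in the configuration above, you can hit the apiserver's health endpoint locally; a healthy apiserver simply answers ok:

curl http://127.0.0.1:8080/healthz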

XII. Register kube-controller-manager as a systemd Service, Enabled at Boot

1. Create the service unit file
vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager

[Service]
EnvironmentFile=/etc/kubernetes/config
EnvironmentFile=/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
2. Create the kube-controller-manager configuration file
vim /etc/kubernetes/controller-manager
###
# The following values are used to configure the kubernetes controller-manager

# defaults from config and apiserver should be adequate

# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--address=0.0.0.0 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
--root-ca-file=/etc/kubernetes/ssl/ca.pem \
--leader-elect=true \
--node-monitor-grace-period=40s \
--node-monitor-period=5s \
--pod-eviction-timeout=60s"
3. Start kube-controller-manager
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager


XIII. Register kube-scheduler as a systemd Service, Enabled at Boot

1. Create the service unit file
vim /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler Plugin

[Service]
EnvironmentFile=/etc/kubernetes/config
EnvironmentFile=/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
2. Create the kube-scheduler configuration file
vim /etc/kubernetes/scheduler
###
# kubernetes scheduler config

# default config should be adequate

# Add your own!
KUBE_SCHEDULER_ARGS="--leader-elect=true --address=0.0.0.0"
3. Start kube-scheduler
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl status kube-scheduler


Tip: kube-apiserver is the core service; if the apiserver fails to start, the other components will fail as well.

XIV. Verify the Master Node
[root@master ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health":"true"}

If there are no errors, the master node has been installed successfully!

That completes the master node: everything is installed and running. Next comes the node installation.

B. Node Deployment

I. Copy the Certificates, Keys, and Configuration Files Distributed from the Master

1. First, on the master, copy the previously generated /root/.kube/config to /etc/kubernetes/ and name it kubelet.kubeconfig for later use
cp /root/.kube/config /etc/kubernetes/kubelet.kubeconfig
2. Copy the /etc/kubernetes/ directory from the master to each node, into the same path /etc/kubernetes/ (a sketch of this copy follows below)
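A minimal sketch of that copy, assuming root SSH access from the master to the nodes (the node IPs are the ones from the environment above):

# Run on the master: push the whole /etc/kubernetes directory to each node
for node in 192.168.1.90 192.168.1.91; do
    scp -r /etc/kubernetes root@${node}:/etc/
done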

II. Install Docker on the Node

1. Install the dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2
2. Add the Docker yum repository
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
3. List the available docker-ce versions
yum list docker-ce --showduplicates | sort -r
* updates: mirrors.163.com
Loading mirror speeds from cached hostfile
Loaded plugins: fastestmirror
* extras: mirrors.163.com
* epel: mirror01.idc.hinet.net
docker-ce.x86_64 18.06.1.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.0.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.03.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 18.03.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.12.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.12.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.09.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.09.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.2.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.3.ce-1.el7 docker-ce-stable
docker-ce.x86_64 17.03.2.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.0.ce-1.el7.centos docker-ce-stable
* base: mirrors.163.com
Available Packages
4. Install the version you want
yum -y install docker-ce-18.03.1.ce-1.el7.centos
5. Add the root user to the docker group
gpasswd -a root docker
6. Start Docker
systemctl start docker
7. Enable it at boot
systemctl enable docker
8. To use the flannel network, Docker needs to start with the --bip option; modify Docker's systemd unit file accordingly
vim /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service flanneld.service
Wants=network-online.target
Requires=flanneld.service

[Service]
Type=notify

EnvironmentFile=/run/flannel/docker
EnvironmentFile=/run/flannel/subnet.env

# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker

ExecStart=/usr/bin/dockerd \
$DOCKER_OPT_BIP \
$DOCKER_OPT_IPMASQ \
$DOCKER_OPT_MTU \
$DOCKER_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

III. Install the flannel Network Plugin on the Nodes

Every node needs a network plugin so that all pods can join the same flat network. I use flannel here; you can of course pick a different plugin, see the official documentation.

I recommend installing flannel with yum; the version it installs by default is 0.7.1. If you need a different version you can install from the binary release instead, see this article: http://blog.51cto.com/tryingstuff/2121707
[root@node ~]# yum list flannel | sort -r
 * updates: mirrors.cn99.com
Loading mirror speeds from cached hostfile
Loaded plugins: fastestmirror
flannel.x86_64 0.7.1-4.el7 extras
* extras: mirrors.aliyun.com
* epel: mirrors.aliyun.com
* base: mirrors.shu.edu.cn
Available Packages
1. Install flannel
yum -y install flannel
2. Modify the service unit file
vim /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld-start -etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS} -etcd-prefix=${FLANNEL_ETCD_PREFIX} $FLANNEL_OPTIONS
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
WantedBy=docker.service

Parameter notes:

1) The flannel network is only meaningful if the host network can already reach the other nodes, hence After=network.target.
2) Only after flannel has started can it create a subnet that does not clash with the other nodes, and Docker's network has to live inside the flannel network for cross-host communication to work, so Docker must start after the flannel network exists, hence Before=docker.service.
3) The /etc/sysconfig/flanneld file holds the flannel startup parameters; because they have to point at the etcd cluster, some of them are site-specific, so they are defined there rather than in the unit file.
4) After startup, flannel's bundled script /usr/libexec/flannel/mk-docker-opts.sh generates the startup options for Docker, including the subnet for the docker0 interface.
3. Modify the configuration file
vim /etc/sysconfig/flanneld
# Flanneld configuration options  

# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="https://192.168.1.89:2379"   ## etcd cluster endpoints; separate multiple endpoints with commas


# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/kube-centos/network"

# Any additional options that you want to pass
FLANNEL_OPTIONS="-etcd-cafile=/etc/kubernetes/ssl/ca.pem -etcd-certfile=/etc/kubernetes/ssl/etcd.pem -etcd-keyfile=/etc/kubernetes/ssl/etcd-key.pem"
4. Create the network configuration in etcd (our etcd runs on the master, so execute these commands on the master)
etcdctl --endpoints=https://192.168.1.89:2379 --ca-file=/etc/kubernetes/ssl/ca.pem --cert-file=/etc/kubernetes/ssl/etcd.pem --key-file=/etc/kubernetes/ssl/etcd-key.pem mkdir /kube-centos/network
etcdctl --endpoints=https://192.168.1.89:2379 --ca-file=/etc/kubernetes/ssl/ca.pem --cert-file=/etc/kubernetes/ssl/etcd.pem --key-file=/etc/kubernetes/ssl/etcd-key.pem mk /kube-centos/network/config '{"Network":"192.168.0.0/16","SubnetLen":24,"Backend":{"Type":"host-gw"}}'
5. Start flannel on the node
systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld
systemctl status flanneld
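Once flanneld is running, its mk-docker-opts.sh helper writes the subnet information that the Docker unit file above consumes; you can inspect those files to confirm a subnet was assigned (the exact subnet will differ from host to host):

cat /run/flannel/subnet.env
cat /run/flannel/docker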


6. Restart Docker on the node
systemctl restart docker
systemctl status docker
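After the restart, docker0 should have picked up an address inside the flannel subnet; a quick check (output will vary):

ip addr show docker0
ip route | grep docker0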


IV. Inspect the flannel Data in etcd

[root@master ~]# etcdctl --endpoints=https://192.168.1.89:2379 --ca-file=/etc/kubernetes/ssl/ca.pem --cert-file=/etc/kubernetes/ssl/etcd.pem --key-file=/etc/kubernetes/ssl/etcd-key.pem ls /kube-centos/network/subnets
/kube-centos/network/subnets/192.168.23.0-24

[root@master ~]# etcdctl --endpoints=https://192.168.1.89:2379 --ca-file=/etc/kubernetes/ssl/ca.pem --cert-file=/etc/kubernetes/ssl/etcd.pem --key-file=/etc/kubernetes/ssl/etcd-key.pem get /kube-centos/network/config
{"Network":"192.168.0.0/16","SubnetLen":24,"Backend":{"Type":"host-gw"}}

[root@master ~]# etcdctl --endpoints=https://192.168.1.89:2379 --ca-file=/etc/kubernetes/ssl/ca.pem --cert-file=/etc/kubernetes/ssl/etcd.pem --key-file=/etc/kubernetes/ssl/etcd-key.pem get /kube-centos/network/subnets/192.168.23.0-24
{"PublicIP":"192.168.1.90","BackendType":"host-gw"}

Note: flannel must be installed on every node; it does not need to be installed on the master.

V. Install kubelet on the Node

1. Download the Kubernetes binary package

We already downloaded kubernetes-server-linux-amd64.tar.gz earlier; it contains every component Kubernetes needs to run. Upload it to this server and install the components we need.

2. Unpack and install
tar -zxvf kubernetes-server-linux-amd64.tar.gz
mv kubernetes /usr/local/
ln -s /usr/local/kubernetes/server/bin/kubelet /usr/bin/
3. Create the kubelet service unit file
vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBELET_API_SERVER \
$KUBELET_ADDRESS \
$KUBELET_PORT \
$KUBELET_HOSTNAME \
$KUBE_ALLOW_PRIV \
$KUBELET_POD_INFRA_CONTAINER \
$KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
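The unit file above uses /var/lib/kubelet as its working directory. On a freshly installed node that directory may not exist yet, so it is worth creating it before starting the service (a precaution I am adding here, not part of the original steps):

# Create kubelet's working directory if it does not exist
mkdir -p /var/lib/kubelet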
4. Create the kubelet configuration file
vim /etc/kubernetes/kubelet
###
## kubernetes kubelet (minion) config
#
## The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
#
## The port for the info server to serve on
KUBELET_PORT="--port=10250"
#
## You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.1.90"
#
## location of the api-server
## COMMENT THIS ON KUBERNETES 1.8+
#KUBELET_API_SERVER=""
#
## Add your own!
KUBELET_ARGS="--cgroup-driver=cgroupfs --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local --hairpin-mode promiscuous-bridge --serialize-image-pulls=false"
5. Start kubelet
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet


6. Open port 10250 in the firewall
firewall-cmd --add-port=10250/tcp --permanent
systemctl restart firewalld
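If the kubelet registers through the bootstrap kubeconfig rather than the copied admin kubeconfig, its certificate signing request must be approved on the master before the node turns Ready. A minimal sketch (the CSR name below is a placeholder; use whatever kubectl get csr reports):

# On the master: list pending CSRs, then approve by name
kubectl get csr
kubectl certificate approve node-csr-XXXX   # placeholder name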

VI. Install kube-proxy on the Node

1. Create the symlink

The previous step already unpacked all the binaries, so only a symlink is needed here.
ln -s /usr/local/kubernetes/server/bin/kube-proxy /usr/bin/
2. Create the kube-proxy service unit file
vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
After=network.target

[Service]
EnvironmentFile=/etc/kubernetes/config
EnvironmentFile=/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
3. Create the kube-proxy configuration file
vim /etc/kubernetes/proxy
###
# kubernetes proxy config

# default config should be adequate

# Add your own!
KUBE_PROXY_ARGS="--bind-address=0.0.0.0 --hostname-override=192.168.1.90 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --masquerade-all"
4. Start kube-proxy
systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy


C. Node1 Deployment

Omitted here: to add more nodes, simply repeat the Node deployment steps above on each machine (remember to change the --hostname-override IPs in the kubelet and kube-proxy configuration files to the new node's own IP).

D. Verification

1. On the master, verify that the two nodes have joined
[root@master ~]# kubectl get nodes
NAME           STATUS    ROLES     AGE       VERSION
192.168.1.90 Ready <none> 1h v1.11.3
192.168.1.91 Ready <none> 2m v1.11.3

A status of Ready means the node has joined successfully!

2. Create an nginx service to check that the cluster actually works
# kubectl run nginx --replicas=2 --labels="run=load-balancer-example" --image=nginx --port=80
deployment.apps/nginx created
# kubectl expose deployment nginx --type=NodePort --name=examples-service
service/examples-service exposed
# kubectl describe svc examples-service
Name:                     examples-service
Namespace: default
Labels: run=load-balancer-example
Annotations: <none>
Selector: run=load-balancer-example
Type: NodePort
IP: 10.0.0.219
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 48743/TCP
Endpoints: 192.168.38.2:80,192.168.42.2:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>

Accessing any node's IP plus the NodePort brings up the nginx welcome page (see the quick check below).
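A quick check from any machine that can reach the nodes, using the NodePort reported above (48743 here; yours will differ):

curl -s http://192.168.1.90:48743 | grep -i "welcome to nginx"
curl -s http://192.168.1.91:48743 | grep -i "welcome to nginx"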


3. Delete a pod and see whether a replacement is created automatically to keep the deployment at its desired size

List the current pods:
kubectl get pod
NAME                    READY     STATUS    RESTARTS   AGE
nginx-8f8f7665f-ffvsm 1/1 Running 0 7m
nginx-8f8f7665f-t2ntn 1/1 Running 0 7m

Delete one of them:
kubectl delete pod nginx-8f8f7665f-ffvsm
pod "nginx-8f8f7665f-ffvsm" deleted

Check again:
kubectl get pod
NAME                    READY     STATUS    RESTARTS   AGE
nginx-8f8f7665f-94c2v 1/1 Running 0 24s
nginx-8f8f7665f-t2ntn 1/1 Running 0 9m

As you can see, two pods are still Running, just with a new NAME for the replacement. Throughout this process the nginx page stayed reachable with no errors.
