Enabling IPVS in K8S

[root@master1 ~]# cat /etc/modules-load.d/k8s.conf
# systemd-modules-load expects one bare module name per line,
# not shell commands, so list the modules directly:
br_netfilter
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
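
To confirm the modules are actually loaded (for example after a reboot, or after restarting systemd-modules-load), a quick check:

# Should list ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh and nf_conntrack
lsmod | grep -e ip_vs -e nf_conntrack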

Setting a StorageClass as the default

kubectl patch storageclass nfs-client -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
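
To verify, list the StorageClasses; the default one is flagged "(default)" next to its name:

kubectl get storageclass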

Network issues encountered during a binary K8S install

When signing the apiserver certificate, you must include the cluster's Service IP (the address of the built-in kubernetes Service, i.e. the first IP of the service CIDR) in the certificate's SANs.
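
A quick way to check whether an existing certificate's SANs already contain that Service IP (assuming the certificate path used by the apiserver unit shown later in these notes):

openssl x509 -in /etc/kubernetes/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'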

Errors when using NFS as the StorageClass backend

The NFS client tooling (nfs-utils) must be installed on every node.
NFS is not recommended as cluster storage; use it for testing only.
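
For example, on RHEL/CentOS nodes (Debian/Ubuntu uses the nfs-common package instead):

yum install -y nfs-utils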

Installing the metrics component (on a binary-installed K8S cluster)

Error

metrics x509: cannot validate certificate for 192.168.64.103 because it doesn't cont

Fix

Add the following flag to the metrics-server Deployment YAML:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      hostNetwork: true
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname,InternalDNS
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls # add this flag
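
Then re-apply the manifest and wait for the rollout to finish (assuming the manifest is saved locally as metrics-server.yaml):

kubectl apply -f metrics-server.yaml
kubectl -n kube-system rollout status deploy/metrics-server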

Error

[root@master ~]# kubectl top node
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)
Meanwhile the apiserver reports:
Oct 26 17:21:01 master kube-apiserver[18236]: , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
Oct 26 17:21:01 master kube-apiserver[18236]: I1026 17:21:01.182587 18236 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
Oct 26 17:21:05 master kube-apiserver[18236]: E1026 17:21:05.182475 18236 available_controller.go:524] v1beta1.metrics.k8s.io failed with: failing or missing response from https://172.12.118.65:443/apis/metrics.k8s.io/v1beta1: Get …

Fix

[root@master ~]# cat /lib/systemd/system/apiserver.service
[Unit]
Description=apiServer
[Service]
Type=simple
ExecStart=kube-apiserver \
    --advertise-address=192.168.64.101 \
    --allow-privileged=true \
    --authorization-mode=Node,RBAC \
    --client-ca-file=/etc/kubernetes/ca.crt \
    --enable-bootstrap-token-auth=true \
    --etcd-cafile=/etc/etcd/etcd-ca.crt \
    --etcd-certfile=/etc/etcd/apiserver-etcd-client.crt \
    --etcd-keyfile=/etc/etcd/apiserver-etcd-client.key \
    --etcd-servers=https://master:2379 \
    --kubelet-client-certificate=/etc/kubernetes/apiserver-kubelet-client.crt \
    --kubelet-client-key=/etc/kubernetes/apiserver-kubelet-client.key \
    --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
    --secure-port=6443 \
    --service-account-issuer=https://kubernetes.default.svc.cluster.local \
    --service-account-key-file=/etc/kubernetes/sa.pub \
    --service-account-signing-key-file=/etc/kubernetes/sa.key \
    --service-cluster-ip-range=172.12.0.0/16 \
    --tls-cert-file=/etc/kubernetes/apiserver.crt \
    --tls-private-key-file=/etc/kubernetes/apiserver.key \
    --proxy-client-cert-file=/etc/kubernetes/front-proxy-client.crt \
    --proxy-client-key-file=/etc/kubernetes/front-proxy-client.key \
    --requestheader-allowed-names=front-proxy-client \
    --requestheader-client-ca-file=/etc/kubernetes/front-proxy-ca.crt \
    --requestheader-extra-headers-prefix=X-Remote-Extra- \
    --requestheader-group-headers=X-Remote-Group \
    --requestheader-username-headers=X-Remote-User \
    --token-auth-file=/etc/kubernetes/token.csv \
    --service-node-port-range=80-60000 \
    --enable-aggregator-routing=true \ # add this flag
    --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota
[Install]
WantedBy=multi-user.target
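
After editing the unit, reload systemd and restart the service so the new flag takes effect:

systemctl daemon-reload
systemctl restart apiserver.service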

Result

[root@xiaowangc ~]# kubectl top node
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
node01   72m          1%     1007Mi          13%
node02   82m          2%     1112Mi          14%
node03   73m          1%     1052Mi          13%
[root@xiaowangc ~]# kubectl top pod
NAME                                      CPU(cores)   MEMORY(bytes)
nfs-client-provisioner-5684cb8fbd-cl9zp   2m           15Mi
nfs-client-provisioner-5684cb8fbd-t72xg   1m           13Mi
[root@xiaowangc ~]#

Restarting a controller

kubectl rollout restart deploy/prom-deployment
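
To watch the rolling restart complete (same deployment name as above):

kubectl rollout status deploy/prom-deployment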

Setting the K8S node ROLES attribute

[root@master1 ~]# kubectl get node
NAME                      STATUS   ROLES           AGE    VERSION
master1.xiaowangc.local   Ready    control-plane   10h    v1.25.4
master2.xiaowangc.local   Ready    control-plane   9h     v1.25.4
master3.xiaowangc.local   Ready    control-plane   9h     v1.25.4
node1.xiaowangc.local     Ready    nodes           154m   v1.25.4
node2.xiaowangc.local     Ready    nodes           146m   v1.25.4
node3.xiaowangc.local     Ready    nodes           138m   v1.25.4
node4.xiaowangc.local     Ready    nodes           123m   v1.25.4
[root@master1 ~]# kubectl get node --show-labels
NAME                      STATUS   ROLES           AGE    VERSION   LABELS
master1.xiaowangc.local   Ready    control-plane   10h    v1.25.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master1.xiaowangc.local,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
master2.xiaowangc.local   Ready    control-plane   9h     v1.25.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master2.xiaowangc.local,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
master3.xiaowangc.local   Ready    control-plane   9h     v1.25.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master3.xiaowangc.local,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
node1.xiaowangc.local     Ready    nodes           155m   v1.25.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node1.xiaowangc.local,kubernetes.io/os=linux,node-role.kubernetes.io/nodes=
node2.xiaowangc.local     Ready    nodes           147m   v1.25.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node2.xiaowangc.local,kubernetes.io/os=linux,node-role.kubernetes.io/nodes=
node3.xiaowangc.local     Ready    nodes           139m   v1.25.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node3.xiaowangc.local,kubernetes.io/os=linux,node-role.kubernetes.io/nodes=
node4.xiaowangc.local     Ready    nodes           125m   v1.25.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node4.xiaowangc.local,kubernetes.io/os=linux,node-role.kubernetes.io/nodes=
[root@master1 ~]# kubectl label nodes <node-name> node-role.kubernetes.io/<role-name>=   # end with "=" to set the role, or "-" to remove it
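
For example (hypothetical, using the node names from the output above), to give node1 the "nodes" role and then remove it again:

kubectl label nodes node1.xiaowangc.local node-role.kubernetes.io/nodes=
kubectl label nodes node1.xiaowangc.local node-role.kubernetes.io/nodes-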

Kubernetes Services cannot be pinged

Initial analysis: with iptables forwarding, the ping command does not work because ping is based on ICMP, and ICMP does not run on top of TCP/UDP, so it has no concept of TCP/UDP ports. Ping therefore cannot be used against the ports a Service is configured to listen on and trap.
In addition, the default iptables policy rejects all ICMP traffic, so the address cannot be pinged unless you explicitly allow the ICMP protocol.

If the cluster uses iptables forwarding, try allowing the ICMP protocol (see the sketch after the flush command below).
If the cluster uses IPVS forwarding, leftover iptables rules may interfere; try flushing them with the following command:

iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat
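
For the iptables-mode case above, a minimal sketch of an explicit ICMP accept rule:

# Accept ICMP (echo request/reply) so Service addresses answer ping
iptables -I INPUT -p icmp -j ACCEPT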

You can also consider letting iptables pass all of the IPVS traffic:

iptables -I INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -I FORWARD -j ACCEPT

If the cluster uses IPVS forwarding, pinging the Service name fails to resolve, the related Pod logs show no errors, and the host logs IPVS: rr: TCP x.x.x.x:x - no destination available, simply flush the firewall rules with iptables:

iptables -F
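
The "no destination available" message means the IPVS virtual service has no real servers behind it; you can inspect the IPVS table to confirm (assumes the ipvsadm tool is installed):

# Lists each virtual service and the real servers behind it
ipvsadm -Ln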

Checking the kube-proxy mode in Kubernetes

curl 127.0.0.1:10249/proxyMode

[root@k8s-master01 ~]# curl 127.0.0.1:10249/proxyMode
iptables