
Applying K8s Workload Controllers

ZiChen D
2021-12-25

nodeName: specifying a node by name

As the name suggests, this field schedules the Pod directly onto the named Node, bypassing the scheduler entirely.

Example manifest:

apiVersion: v1
kind: Pod
metadata:
  name: test
  labels:
    app: test
spec:
  nodeName: node1.example.com   # bound directly to this node; the scheduler is skipped
  containers:
  - name: test
    image: busybox
    command: ["sleep", "3600"]  # keep the container running; a bare busybox image would exit immediately

Common Workload Controllers

  • What a workload controller is
  • Deployment
  • DaemonSet
  • Job
  • CronJob

What a Workload Controller Is

Workload controllers (Workload Controllers) are a K8s abstraction: higher-level objects that deploy and manage Pods.

Common workload controllers:
Deployment: stateless application deployment
StatefulSet: stateful application deployment
DaemonSet: ensures every Node runs a copy of the same Pod
Job: one-off tasks
CronJob: scheduled tasks

What a controller does:
Manages Pod objects
Associates with its Pods via labels
Handles Pod operations such as rolling updates, scaling, replica management, and maintaining Pod state.

Deployment

Introduction

What Deployment provides:
Manages Pods and ReplicaSets
Supports rollout, replica configuration, rolling upgrades, and rollback
Offers declarative updates, e.g. changing only the image

Use cases: websites, APIs, microservices

Workflow

Deploy

Step 1: deploy an image
kubectl apply -f xxx.yaml
kubectl create deployment web --image=nginx:1.15

Example:

[root@master ~]# cat test.yml 
---
apiVersion: apps/v1
kind: Deployment
metadata: 
  name: test
  namespace: default
spec: 
  replicas: 3		# expected number of Pod replicas
  selector: 
    matchLabels: 
      app: test
  template: 
    metadata: 
      labels: 
        app: test	# label carried by the Pod replicas
    spec: 
      containers: 
      - name: test
        image: busybox
        command: 
        - "/bin/sh"     # busybox ships sh, not bash
        - "-c"
        - "sleep 9000"
[root@master ~]# kubectl apply -f test.yml 
deployment.apps/test created
[root@master ~]# kubectl get pods
NAME                  READY   STATUS              RESTARTS   AGE
test-9d85dccb-7f7g9   0/1     ContainerCreating   0          10s
test-9d85dccb-drfc5   0/1     ContainerCreating   0          10s
test-9d85dccb-zkbqb   0/1     ContainerCreating   0          10s

Rolling Upgrade

Step 2: upgrade the application (three ways to update the image)

  • kubectl apply -f xxx.yaml
  • kubectl set image deployment/web nginx=nginx:1.16
  • kubectl edit deployment/web

The first method is the recommended one; the third is discouraged because a live edit is easy to get wrong.

Rolling upgrade: K8s's default update strategy for Pods. New-version Pods gradually replace old-version ones, giving a zero-downtime release that users never notice.

For example, with three replicas, one old Pod is stopped first; the controller sees the replica count drop below the desired number and starts a replacement, which picks up the updated image from the manifest and therefore runs the new version. This repeats until the update is complete.

Rolling update strategy:
Example using the first method, updating via a resource manifest (YAML):

[root@master ~]# cat test.yml 
---
apiVersion: apps/v1
kind: Deployment
metadata: 
  name: apache
spec: 
  replicas: 3				# expected number of Pods
  selector: 
    matchLabels: 
      app: httpd
  template: 
    metadata: 
      labels: 
        app: httpd
    spec: 
      containers: 
      - name: httpd
        image: dengzichen/httpd:v1	# create the Pods at v1 first
        imagePullPolicy: IfNotPresent

[root@master ~]# kubectl apply -f test.yml 
deployment.apps/apache created
[root@master ~]# kubectl get pod
NAME                      READY   STATUS              RESTARTS   AGE
apache-7f6fd56575-mpt2w   1/1     Running             0          8s
apache-7f6fd56575-rhnk7   0/1     ContainerCreating   0          8s
apache-7f6fd56575-w8t9l   1/1     Running             0          8s

[root@master ~]# cat test.yml 
---
apiVersion: apps/v1
kind: Deployment
metadata: 
  name: apache
spec: 
  replicas: 3
  selector: 
    matchLabels: 
      app: httpd
  template: 
    metadata: 
      labels: 
        app: httpd
    spec: 
      containers: 
      - name: httpd
        image: dengzichen/httpd:v2		# change the version to v2
        imagePullPolicy: IfNotPresent

[root@master ~]# kubectl apply -f test.yml 
deployment.apps/apache configured
[root@master ~]# kubectl get pod		# during the update
NAME                      READY   STATUS              RESTARTS   AGE
apache-599c45c854-n2kvv   0/1     ContainerCreating   0          24s
apache-599c45c854-wbpj8   1/1     Running             0          64s
apache-7f6fd56575-mpt2w   1/1     Running             0          2m46s
apache-7f6fd56575-rhnk7   1/1     Terminating         0          2m46s
apache-7f6fd56575-w8t9l   1/1     Running             0          2m46s

[root@master ~]# kubectl get pod		# update complete
NAME                      READY   STATUS    RESTARTS   AGE
apache-599c45c854-jp4tr   1/1     Running   0          86s
apache-599c45c854-n2kvv   1/1     Running   0          2m46s
apache-599c45c854-wbpj8   1/1     Running   0          3m26s

The output shows Pods being stopped one at a time while two new ones start. A real deployment will have far more than two or three Pods, so:
Question: is there a way to control how many Pods are updated at a time?

There is. Inspecting the live resource configuration reveals the following:

strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate

This is the rolling update strategy.

maxSurge: the maximum number of extra Pod replicas during a rolling update. With the default of 25%, at most 25% more Pods than the desired count (replicas) may exist while updating.

maxUnavailable: the maximum number of unavailable Pod replicas during a rolling update. With the default of 25%, at most 25% of the Pods may be unavailable, i.e. at least 75% remain available.

Example

[root@master ~]# kubectl get pod	# currently 4 Pods, running v2
NAME                      READY   STATUS    RESTARTS   AGE
apache-7f6fd56575-82gch   1/1     Running   0          76s
apache-7f6fd56575-j77lb   1/1     Running   0          75s
apache-7f6fd56575-vpgds   1/1     Running   0          75s
apache-7f6fd56575-wc589   1/1     Running   0          76s

[root@master ~]# cat test.yml 
---
apiVersion: apps/v1
kind: Deployment
metadata: 
  name: apache
spec: 
  replicas: 4			# desired count set to 4, to make the arithmetic easy
  selector: 
    matchLabels: 
      app: httpd
  strategy: 
    rollingUpdate: 
      maxSurge: 25%		# explicit strategy; the absolute count scales with replicas
      maxUnavailable: 25%
    type: RollingUpdate
  template: 
    metadata: 
      labels: 
        app: httpd
    spec: 
      containers: 
      - name: httpd
        image: dengzichen/httpd:v1	# back to v1, to trigger an update
        imagePullPolicy: IfNotPresent

[root@master ~]# kubectl apply -f test.yml 
deployment.apps/apache configured

[root@master ~]# kubectl get pod		# during the update
NAME                      READY   STATUS              RESTARTS   AGE
apache-599c45c854-5px8m   1/1     Running             0          6m8s
apache-599c45c854-6fpzv   1/1     Running             0          6m7s
apache-599c45c854-j24d8   1/1     Terminating         0          6m7s
apache-599c45c854-nmvxd   1/1     Running             0          6m8s
apache-7f6fd56575-7gv6f   0/1     ContainerCreating   0          1s
apache-7f6fd56575-zmmv7   0/1     ContainerCreating   0          1s

[root@master ~]# kubectl get pod		# after the update
NAME                      READY   STATUS    RESTARTS   AGE
apache-7f6fd56575-7gv6f   1/1     Running   0          85s
apache-7f6fd56575-9b5mm   1/1     Running   0          84s
apache-7f6fd56575-r9j4h   1/1     Running   0          83s
apache-7f6fd56575-zmmv7   1/1     Running   0          85s

Again one Pod stops while two start, but this time the numbers follow from the configured values:

Maximum Pod count = replicas + ceil(replicas × maxSurge)
Here that is 4 + 1 = 5: starting from 4 Pods with 1 stopped (3 running), 2 new ones may start to reach the ceiling of 5.

Minimum available count = replicas − floor(replicas × maxUnavailable)
Here that is 4 − 1 = 3: to keep the service unaffected, only 1 Pod may be stopped, leaving the floor of 3.

25% is both the default and a configurable value; you can set 50% or anything else and plug it into the same formulas.

Question: what happens when the numbers don't divide evenly?

Answer: the percentages are resolved deterministically — maxSurge is rounded up, maxUnavailable is rounded down.

[root@master ~]# cat test.yml 
---
apiVersion: apps/v1
kind: Deployment
metadata: 
  name: apache
spec: 
  replicas: 5			# desired count is 5
  selector: 
    matchLabels: 
      app: httpd
  strategy: 
    rollingUpdate: 
      maxSurge: 50%		# use 50% for this example
      maxUnavailable: 50%
    type: RollingUpdate
  template: 
    metadata: 
      labels: 
        app: httpd
    spec: 
      containers: 
      - name: httpd
        image: dengzichen/httpd:v2
        imagePullPolicy: IfNotPresent

[root@master ~]# kubectl get pod		# during the update
NAME                      READY   STATUS              RESTARTS   AGE
apache-599c45c854-4rd5d   0/1     ContainerCreating   0          2s
apache-599c45c854-m8tq4   0/1     ContainerCreating   0          2s
apache-599c45c854-vlqpb   0/1     ContainerCreating   0          2s
apache-599c45c854-w6vfl   0/1     ContainerCreating   0          2s
apache-599c45c854-xcklf   0/1     ContainerCreating   0          2s
apache-7f6fd56575-7gv6f   1/1     Running             0          14m
apache-7f6fd56575-9b5mm   1/1     Terminating         0          14m
apache-7f6fd56575-9xckz   0/1     Terminating         0          2s
apache-7f6fd56575-r9j4h   1/1     Running             0          14m
apache-7f6fd56575-zmmv7   1/1     Running             0          14m

[root@master ~]# kubectl get pod
NAME                      READY   STATUS    RESTARTS   AGE
apache-599c45c854-4rd5d   1/1     Running   0          4m58s
apache-599c45c854-m8tq4   1/1     Running   0          4m58s
apache-599c45c854-vlqpb   1/1     Running   0          4m58s
apache-599c45c854-w6vfl   1/1     Running   0          4m58s
apache-599c45c854-xcklf   1/1     Running   0          4m58s

Plugging the numbers into the formulas:
maxSurge = ceil(5 × 50%) = 3, so up to 5 + 3 = 8 Pods may exist at once.
maxUnavailable = floor(5 × 50%) = 2, so at least 5 − 2 = 3 Pods stay available.
With an odd count, then, the result is not "rounded to nearest": maxSurge always rounds up and maxUnavailable always rounds down.
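The rounding above can be reproduced with plain integer arithmetic. A minimal sketch; `surge` and `unavail` are made-up helper names for illustration, not kubectl features:

```shell
#!/bin/sh
# ceil(replicas * pct / 100): how many extra Pods maxSurge allows
surge() { echo $(( ($1 * $2 + 99) / 100 )); }
# floor(replicas * pct / 100): how many Pods maxUnavailable lets you stop
unavail() { echo $(( $1 * $2 / 100 )); }

echo "4 replicas @ 25%: surge=$(surge 4 25), unavailable=$(unavail 4 25)"   # 1 and 1 -> max 5, min 3
echo "5 replicas @ 50%: surge=$(surge 5 50), unavailable=$(unavail 5 50)"   # 3 and 2 -> max 8, min 3
```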

Horizontal Scaling

Step 3: horizontal scaling (run more instances to raise concurrency)

  • edit the replicas value in the YAML, then apply
  • kubectl scale deployment web --replicas=10

Note: the replicas field controls the number of Pod replicas

[root@master ~]# kubectl get pod		# currently 5 Pods
NAME                      READY   STATUS    RESTARTS   AGE
apache-599c45c854-4rd5d   1/1     Running   0          10m
apache-599c45c854-m8tq4   1/1     Running   0          10m
apache-599c45c854-vlqpb   1/1     Running   0          10m
apache-599c45c854-w6vfl   1/1     Running   0          10m
apache-599c45c854-xcklf   1/1     Running   0          10m

[root@master ~]# cat test.yml 
---
apiVersion: apps/v1
kind: Deployment
metadata: 
  name: apache
spec: 
  replicas: 10		# changed to 10
  selector: 
    matchLabels: 
      app: httpd
  strategy: 
    rollingUpdate: 
      maxSurge: 50%
      maxUnavailable: 50%
    type: RollingUpdate
  template: 
    metadata: 
      labels: 
        app: httpd
    spec: 
      containers: 
      - name: httpd
        image: dengzichen/httpd:v2
        imagePullPolicy: IfNotPresent

[root@master ~]# kubectl apply -f test.yml 
deployment.apps/apache configured

[root@master ~]# kubectl get pod
NAME                      READY   STATUS    RESTARTS   AGE
apache-599c45c854-4rd5d   1/1     Running   0          12m
apache-599c45c854-bpmsv   1/1     Running   0          3s
apache-599c45c854-m8tq4   1/1     Running   0          12m
apache-599c45c854-mpczb   1/1     Running   0          3s
apache-599c45c854-r7jpt   1/1     Running   0          3s
apache-599c45c854-v6vbf   1/1     Running   0          3s
apache-599c45c854-vlqpb   1/1     Running   0          12m
apache-599c45c854-w6vfl   1/1     Running   0          12m
apache-599c45c854-xcklf   1/1     Running   0          12m
apache-599c45c854-xl8vf   1/1     Running   0          3s

# after changing replicas back to 5 and re-applying:
[root@master ~]# kubectl get pod
NAME                      READY   STATUS    RESTARTS   AGE
apache-599c45c854-4rd5d   1/1     Running   0          13m
apache-599c45c854-m8tq4   1/1     Running   0          13m
apache-599c45c854-vlqpb   1/1     Running   0          13m
apache-599c45c854-w6vfl   1/1     Running   0          13m
apache-599c45c854-xcklf   1/1     Running   0          13m

Rollback

Step 4: rollback (recover a working version after a failed release)

kubectl rollout history deployment/apache # list release history
kubectl rollout undo deployment/apache # roll back to the previous revision
kubectl rollout undo deployment/apache --to-revision=2 # roll back to a specific revision (an integer revision number, not an image tag)

Note: a rollback redeploys the complete state of an earlier release — every setting from that revision, not just the image.

List release history

[root@master ~]# kubectl rollout history deploy/apache
deployment.apps/apache 
REVISION  CHANGE-CAUSE
5         <none>
6         <none>

Roll back to the previous revision

[root@master ~]# kubectl rollout undo deployment/apache
deployment.apps/apache rolled back

A rollback does not read the manifest file, but you can pair a monitoring tool with a shell script that runs the command, giving you automatic rollback.

Delete

Finally, taking the project offline:

kubectl delete deploy/apache
kubectl delete svc/apache

What the ReplicaSet Controller Is For

  • Manages the Pod replica count, continuously reconciling the current count against the desired one
  • Every Deployment release creates an RS as a record, which is what makes rollback possible
kubectl get rs # list RS records
kubectl rollout history deployment apache # revisions map to RS records

List RS records

[root@master ~]# kubectl get rs
NAME                DESIRED   CURRENT   READY   AGE
apache-599c45c854   0         0         0       67m
apache-7f6fd56575   5         5         5       69m

Revisions mapped to RS records

[root@master ~]# kubectl rollout history deployment apache
deployment.apps/apache 
REVISION  CHANGE-CAUSE
6         <none>
7         <none>

Question: during a rolling update, can both RS versions show non-zero counts at the same time?
Answer: yes. So when there are many Pods, the RS counters are a convenient way to watch replica turnover and confirm that the update has finished.

DaemonSet

What a DaemonSet does:
Runs one Pod on every Node
A newly joined Node automatically gets a copy of the Pod as well

Use cases: network plugins (kube-proxy, calico) and other per-node agents

Deploy a log collector

[root@master ~]# kubectl get pods -n kube-system
NAME                                         READY   STATUS    RESTARTS   AGE
coredns-7f89b7bc75-75prk                     1/1     Running   0          164m
coredns-7f89b7bc75-r9xg6                     1/1     Running   0          164m
etcd-master.example.com                      1/1     Running   0          164m
kube-apiserver-master.example.com            1/1     Running   0          164m
kube-controller-manager-master.example.com   1/1     Running   0          164m
kube-flannel-ds-6p4ft                        1/1     Running   0          160m
kube-flannel-ds-9f9hs                        1/1     Running   0          160m
kube-flannel-ds-r84pd                        1/1     Running   0          160m
kube-proxy-fxzxn                             1/1     Running   0          160m
kube-proxy-jsf52                             1/1     Running   0          160m
kube-proxy-mf7ht                             1/1     Running   0          164m
kube-scheduler-master.example.com            1/1     Running   0          164m

[root@master ~]# cat daemon.yml 
---
apiVersion: apps/v1
kind: DaemonSet
metadata: 
  name: filebeat
  namespace: kube-system
spec: 
  selector: 
    matchLabels: 
      app: filebeat
  template: 
    metadata: 
      labels: 
        app: filebeat
    spec: 
      containers: 
      - name: log
        image: elastic/filebeat:7.16.2
        imagePullPolicy: IfNotPresent

[root@master ~]# kubectl apply -f daemon.yml 
daemonset.apps/filebeat created

[root@master ~]# kubectl get pods -n kube-system
NAME                                         READY   STATUS              RESTARTS   AGE
coredns-7f89b7bc75-75prk                     1/1     Running             0          171m
coredns-7f89b7bc75-r9xg6                     1/1     Running             0          171m
etcd-master.example.com                      1/1     Running             0          171m
filebeat-kcm5x                               0/1     ContainerCreating   0          10s			
filebeat-xxkxg                               0/1     ContainerCreating   0          10s
kube-apiserver-master.example.com            1/1     Running             0          171m
kube-controller-manager-master.example.com   1/1     Running             0          171m
kube-flannel-ds-6p4ft                        1/1     Running             0          168m
kube-flannel-ds-9f9hs                        1/1     Running             0          168m
kube-flannel-ds-r84pd                        1/1     Running             0          168m
kube-proxy-fxzxn                             1/1     Running             0          168m
kube-proxy-jsf52                             1/1     Running             0          168m
kube-proxy-mf7ht                             1/1     Running             0          171m
kube-scheduler-master.example.com            1/1     Running             0          171m

[root@master ~]# kubectl get pods -n kube-system -o wide
NAME                                         READY   STATUS    RESTARTS   AGE    IP                NODE                 NOMINATED NODE   READINESS GATES
coredns-7f89b7bc75-75prk                     1/1     Running   0          172m   10.244.0.3        master.example.com   <none>           <none>
coredns-7f89b7bc75-r9xg6                     1/1     Running   0          172m   10.244.0.2        master.example.com   <none>           <none>
etcd-master.example.com                      1/1     Running   0          172m   192.168.160.123   master.example.com   <none>           <none>
filebeat-kcm5x                               1/1     Running   0          59s    10.244.1.20       node1.example.com    <none>           <none>		# running on the existing worker nodes
filebeat-xxkxg                               1/1     Running   0          59s    10.244.2.21       node2.example.com    <none>           <none>		
kube-apiserver-master.example.com            1/1     Running   0          172m   192.168.160.123   master.example.com   <none>           <none>
kube-controller-manager-master.example.com   1/1     Running   0          172m   192.168.160.123   master.example.com   <none>           <none>
kube-flannel-ds-6p4ft                        1/1     Running   0          168m   192.168.160.125   node2.example.com    <none>           <none>
kube-flannel-ds-9f9hs                        1/1     Running   0          169m   192.168.160.124   node1.example.com    <none>           <none>
kube-flannel-ds-r84pd                        1/1     Running   0          169m   192.168.160.123   master.example.com   <none>           <none>
kube-proxy-fxzxn                             1/1     Running   0          169m   192.168.160.124   node1.example.com    <none>           <none>
kube-proxy-jsf52                             1/1     Running   0          168m   192.168.160.125   node2.example.com    <none>           <none>
kube-proxy-mf7ht                             1/1     Running   0          172m   192.168.160.123   master.example.com   <none>           <none>
kube-scheduler-master.example.com            1/1     Running   0          172m   192.168.160.123   master.example.com   <none>           <none>
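Note that the filebeat Pods landed only on node1 and node2: the control-plane node carries a NoSchedule taint that this DaemonSet does not tolerate (kube-proxy and flannel do tolerate it, which is why they run on the master). If the agent should run there too, a toleration can be added to the Pod template — a sketch only; the taint key shown is the usual kubeadm default and may differ on your cluster:

```yaml
      # inside spec.template.spec of the DaemonSet
      tolerations:
      - key: node-role.kubernetes.io/master   # newer clusters use node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
```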

Join a new node3 and check whether a Pod starts on it automatically.
For the join procedure, see my K8s cluster deployment article.

[root@node3 ~]# kubeadm join 192.168.160.123:6443 --token y9uv8d.q3891g4ksnt8yf9v     --discovery-token-ca-cert-hash sha256:4c9a69695ae6e6881906de07a8851669874ae9102a2b6e184b310d632081ad74

[root@master ~]# kubectl get pods -n kube-system
NAME                                         READY   STATUS    RESTARTS   AGE
coredns-7f89b7bc75-75prk                     1/1     Running   3          15h
coredns-7f89b7bc75-r9xg6                     1/1     Running   3          15h
etcd-master.example.com                      1/1     Running   3          15h
filebeat-kcm5x                               1/1     Running   3          12h
filebeat-vgh2g                               1/1     Running   0          2m7s
filebeat-xxkxg                               1/1     Running   3          12h
kube-apiserver-master.example.com            1/1     Running   3          15h
kube-controller-manager-master.example.com   1/1     Running   3          15h
kube-flannel-ds-6p4ft                        1/1     Running   3          15h
kube-flannel-ds-9f9hs                        1/1     Running   3          15h
kube-flannel-ds-nmbf2                        1/1     Running   0          3m19s
kube-flannel-ds-r84pd                        1/1     Running   3          15h
kube-proxy-fxzxn                             1/1     Running   3          15h
kube-proxy-jsf52                             1/1     Running   3          15h
kube-proxy-mf7ht                             1/1     Running   3          15h
kube-proxy-s8ghc                             1/1     Running   0          3m19s
kube-scheduler-master.example.com            1/1     Running   3          15h

Job

Jobs come in two kinds: ordinary tasks (Job) and scheduled tasks (CronJob)

  • Runs to completion once

Use cases: offline data processing, video transcoding, and similar batch work

Example

[root@master ~]# cat job.yml 
---
apiVersion: batch/v1
kind: Job
metadata: 
  name: test
spec: 
  template: 
    spec: 
      containers: 
      - name: test
        image: busybox
        command: ["/bin/sh","-c","echo hello"]
      restartPolicy: Never
  backoffLimit: 3

[root@master ~]# kubectl apply -f job.yml 
job.batch/test created

[root@master ~]# kubectl get pods
NAME         READY   STATUS      RESTARTS   AGE
test-2lgm4   0/1     Completed   0          26s		# the Pod exits when the task finishes, so it never stays Running

[root@master ~]# kubectl describe job/test		# inspect the Job
Pods Statuses:  0 Running / 1 Succeeded / 0 Failed		# the run succeeded
Pod Template:
  Labels:  controller-uid=2d7f1c9d-011b-4280-95ae-03d795431b43
           job-name=test
  Containers:
   test:
    Image:      busybox
    Port:       <none>
    Host Port:  <none>
    Command:
      /bin/sh
      -c
      echo hello
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From            Message
  ----    ------            ----  ----            -------
  Normal  SuccessfulCreate  74s   job-controller  Created pod: test-2lgm4
  Normal  Completed         57s   job-controller  Job completed
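The Job above runs a single Pod. For the batch workloads mentioned earlier (offline processing, transcoding), a Job can also fan out over several parallel runs via the completions and parallelism fields — a sketch with illustrative values:

```yaml
spec:
  completions: 6    # the Job succeeds after 6 Pods complete successfully
  parallelism: 2    # run at most 2 Pods at a time
  backoffLimit: 3   # give up after 3 failed retries, as in the example above
```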

CronJob

CronJob implements scheduled tasks, much like crontab on Linux

  • Scheduled, recurring execution

Use cases: notifications, backups

Example

[root@master ~]# cat cronjob.yml 
---
apiVersion: batch/v1beta1
kind: CronJob
metadata: 
  name: custom
spec: 
  schedule: "*/1 * * * *"		# run every minute
  jobTemplate: 
    spec: 
      template: 
        spec: 
          containers: 
          - name: custom
            image: busybox
            command: 
            - /bin/sh
            - "-c"
            - "date;echo hello world"
          restartPolicy: OnFailure

[root@master ~]# kubectl apply -f cronjob.yml 
cronjob.batch/custom created

[root@master ~]# kubectl get pods
NAME                      READY   STATUS              RESTARTS   AGE
custom-1640371500-jrc5k   0/1     ContainerCreating   0          9s		# completes after a short wait
test-2lgm4                0/1     Completed           0          11m
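Two CronJob knobs are worth knowing about, since finished Jobs otherwise pile up in listings like the one above — a sketch with illustrative values:

```yaml
spec:
  schedule: "*/1 * * * *"
  concurrencyPolicy: Forbid        # skip a run if the previous one is still going
  successfulJobsHistoryLimit: 3    # keep only the 3 most recent successful Jobs
  failedJobsHistoryLimit: 1
```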