Exposing Applications with a K8s Service

ZiChen D
2021-12-26

Exposing an Application with a Service

What Is a Service

A Service is an abstraction of an application service: it defines a logical set of Pods and a policy for accessing that set.
A Service presents the Pods behind it as a single access point: it is assigned a cluster IP address, and requests to that IP are load-balanced across the containers in the backend Pods.
A Service selects the group of Pods it serves through a label selector.

In a K8s cluster, the Service object is what clients access. Each Service maps to a virtual IP that is valid inside the cluster, and services are reached through that virtual IP from within the cluster. Load balancing for microservices in a K8s cluster is implemented by kube-proxy, the cluster-internal load balancer. It is a distributed proxy server with one instance on every K8s node, and this design scales naturally: the more nodes that need to access services, the more kube-proxy instances there are providing load balancing, and the more highly available nodes there are. Compare this with the usual approach of putting a reverse proxy in front of servers for load balancing, where you then also have to solve the load balancing and high availability of the reverse proxy itself.

"Service microservices" are at the core of Kubernetes. By analyzing, identifying, and modeling all services in a system as microservices, the final system ends up composed of multiple mutually independent microservice units, each providing a distinct business capability and communicating over TCP/IP. Every Pod is assigned its own IP address, and every Pod exposes an independent endpoint for clients to access.

For a client to reach the service inside Pods, a load balancer must be deployed: an external service port is opened for the Pods, the Pods' endpoint list is added to the forwarding table, and clients then access the service through the load balancer's external IP and port. Each Service gets a globally unique virtual ClusterIP, so every service becomes a "communication node" with its own IP address, and service calls reduce to plain TCP networking.

Why Services Exist

Services were introduced mainly to cope with the dynamic nature of Pods and to provide a single access point:

  • Prevent losing track of Pods by always being able to find the Pods backing the same service (service discovery)
  • Define an access policy for a group of Pods (load balancing)

The Relationship Between Pods and Services

  • A Service is associated with a group of Pods through labels
  • A Service load-balances across that group of Pods using iptables or IPVS (LVS)

iptables is used by default, providing load balancing through NAT.

Defining and Creating a Service

Create a Service:

kubectl apply -f service.yml

View Services:

kubectl get service

Create a Service from a manifest file:

[root@master ~]# cat service.yml 
---
apiVersion: apps/v1
kind: Deployment
metadata: 
  name: apache
  namespace: default
spec: 
  replicas: 3
  selector: 
    matchLabels: 
      app: httpd
  template: 
    metadata: 
      labels: 
        app: httpd
    spec: 
      containers: 
      - name: httpd
        image: dengzichen/httpd:v1
        imagePullPolicy: IfNotPresent

---
apiVersion: v1
kind: Service
metadata: 
  name: web
spec: 
  ports: 
  - port: 8080
    targetPort: 80
  selector: 
    app: httpd

[root@master ~]# kubectl apply -f service.yml 
deployment.apps/apache unchanged
service/web created

[root@master ~]# kubectl get pods,svc
NAME                          READY   STATUS    RESTARTS   AGE
pod/apache-7f6fd56575-dp9fw   1/1     Running   0          94s
pod/apache-7f6fd56575-fdxzk   1/1     Running   0          94s
pod/apache-7f6fd56575-kd6tm   1/1     Running   0          94s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP    47h
service/web          ClusterIP   10.96.47.226   <none>        8080/TCP   73s

# access the service via its cluster IP
[root@master ~]# curl http://10.96.47.226:8080
test page on v1

# from inside a container, access via the service name over the pod network
[root@master ~]# kubectl run -it --rm test1 --image busybox -- /bin/sh
If you don't see a command prompt, try pressing enter.
/ # wget -q -O - http://web:8080
test page on v1

Common Service Types

  • ClusterIP: for access inside the cluster
  • NodePort: exposes the application outside the cluster
  • LoadBalancer: exposes the application outside the cluster; suited to public clouds

ClusterIP

The default type. The Service is assigned a stable IP address (a VIP) that is reachable only from inside the cluster.

Example:

[root@master ~]# cat service.yml 
---
apiVersion: apps/v1
kind: Deployment
metadata: 
  name: apache
  namespace: default
spec: 
  replicas: 3
  selector: 
    matchLabels: 
      app: httpd
  template: 
    metadata: 
      labels: 
        app: httpd
    spec: 
      containers: 
      - name: httpd
        image: dengzichen/httpd:v1
        imagePullPolicy: IfNotPresent

---
apiVersion: v1
kind: Service
metadata: 
  name: web
spec: 
  type: ClusterIP		# the default type
  ports: 
  - port: 80
    targetPort: 80		# forward traffic to container port 80
    protocol: TCP
  selector: 
    app: httpd

[root@master ~]# kubectl apply -f service.yml 
deployment.apps/apache unchanged
service/web configured

[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   2d
web          ClusterIP   10.96.47.226   <none>        80/TCP    35m

NodePort

Opens a port on every node to expose the service, so it can be reached from outside the cluster. A stable internal cluster IP is still assigned.
Access address: <any NodeIP>:<NodePort>
Port range: 30000-32767

A port is opened on every node to receive user traffic, but in practice only a single IP and port are exposed to users; with many nodes, which one should users be pointed at?
This is where a public load balancer placed in front provides a single entry point for the project.
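As a sketch of that front-end load balancer, an nginx configuration could forward to the same node port on each node. The node IPs and the 30000 node port below are assumptions for illustration only:

```nginx
# minimal sketch: spread traffic over the NodePort on two worker nodes
upstream web_nodeport {
    server 192.168.100.11:30000;   # node1 (hypothetical IP)
    server 192.168.100.12:30000;   # node2 (hypothetical IP)
}

server {
    listen 80;                     # the single IP:port users actually see
    location / {
        proxy_pass http://web_nodeport;
    }
}
```

kube-proxy on whichever node receives the request will still load-balance across all backend Pods, so the nginx layer only needs to provide the stable entry point and node-level failover.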

Example:

[root@master ~]# cat service.yml 
---
apiVersion: apps/v1
kind: Deployment
metadata: 
  name: apache
  namespace: default
spec: 
  replicas: 3
  selector: 
    matchLabels: 
      app: httpd
  template: 
    metadata: 
      labels: 
        app: httpd
    spec: 
      containers: 
      - name: httpd
        image: dengzichen/httpd:v1
        imagePullPolicy: IfNotPresent

---
apiVersion: v1
kind: Service
metadata: 
  name: web
spec: 
  type: NodePort		# no node port specified yet
  ports: 
  - port: 80
    targetPort: 80
  selector: 
    app: httpd

[root@master ~]# kubectl apply -f service.yml 
deployment.apps/apache unchanged
service/web configured

[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        2d
web          NodePort    10.96.47.226   <none>        80:32561/TCP   43m		# a node port was allocated at random

[root@master ~]# cat service.yml 
---
apiVersion: apps/v1
kind: Deployment
metadata: 
  name: apache
  namespace: default
spec: 
  replicas: 3
  selector: 
    matchLabels: 
      app: httpd
  template: 
    metadata: 
      labels: 
        app: httpd
    spec: 
      containers: 
      - name: httpd
        image: dengzichen/httpd:v1
        imagePullPolicy: IfNotPresent

---
apiVersion: v1
kind: Service
metadata: 
  name: web
spec: 
  type: NodePort
  ports: 
  - port: 80
    targetPort: 80
    protocol: TCP
    nodePort: 30000		# pin the node port to 30000
  selector: 
    app: httpd

[root@master ~]# kubectl apply -f service.yml 
deployment.apps/apache unchanged
service/web configured
[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        2d
web          NodePort    10.96.47.226   <none>        80:30000/TCP   44m		# now the specified port 30000 is used

Access it in a browser using a node IP plus port 30000:

Inspect the generated rules:

[root@master ~]# iptables-save | grep web
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.47.226/32 -p tcp -m comment --comment "default/web cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.47.226/32 -p tcp -m comment --comment "default/web cluster IP" -m tcp --dport 80 -j KUBE-SVC-LOLE4ISW44XBNF3G
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/web" -m tcp --dport 30000 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/web" -m tcp --dport 30000 -j KUBE-SVC-LOLE4ISW44XBNF3G
-A KUBE-SVC-LOLE4ISW44XBNF3G -m comment --comment "default/web" -m statistic --mode random --probability 0.33333333349 -j KUBE-SEP-KHDGDLGU22BZNLDW
-A KUBE-SVC-LOLE4ISW44XBNF3G -m comment --comment "default/web" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-IJH57ZA777JTMJOL
-A KUBE-SVC-LOLE4ISW44XBNF3G -m comment --comment "default/web" -j KUBE-SEP-HGBICGJPF5C5CTWI
-A KUBE-SEP-KHDGDLGU22BZNLDW -s 10.244.1.144/32 -m comment --comment "default/we" -j KUBE-MARK-MASQ
-A KUBE-SEP-KHDGDLGU22BZNLDW -p tcp -m comment --comment "default/web" -m tcp -j DNAT --to-destination 10.244.1.144:80
-A KUBE-SEP-IJH57ZA777JTMJOL -s 10.244.2.138/32 -m comment --comment "default/we" -j KUBE-MARK-MASQ
-A KUBE-SEP-IJH57ZA777JTMJOL -p tcp -m comment --comment "default/web" -m tcp -j DNAT --to-destination 10.244.2.138:80
-A KUBE-SEP-HGBICGJPF5C5CTWI -s 10.244.2.139/32 -m comment --comment "default/we" -j KUBE-MARK-MASQ
-A KUBE-SEP-HGBICGJPF5C5CTWI -p tcp -m comment --comment "default/web" -m tcp -j DNAT --to-destination 10.244.2.139:80
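The probability values in the KUBE-SVC chain above are worth a closer look: with three endpoints, the first rule matches with probability 1/3, the second with 1/2 of what remains, and the last rule catches everything else, so each endpoint ends up equally likely. A small Python sketch of that arithmetic (illustrative only; this is not kube-proxy code):

```python
def iptables_probabilities(n):
    """Per-rule match probability kube-proxy emits for n endpoints:
    rule i (0-based) matches with probability 1/(n - i)."""
    return [1.0 / (n - i) for i in range(n)]

def overall_shares(probs):
    """Overall chance each rule is selected, given that earlier
    rules short-circuit the chain when they match."""
    shares, remaining = [], 1.0
    for p in probs:
        shares.append(remaining * p)
        remaining *= 1.0 - p
    return shares

probs = iptables_probabilities(3)   # [0.333..., 0.5, 1.0], matching the rules above
shares = overall_shares(probs)      # each endpoint ends up with a 1/3 share
```

This is why the rules are not all "probability 1/3": iptables evaluates them in order, so each rule's probability must be conditional on the earlier ones not matching.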

LoadBalancer

Similar to NodePort: a port is opened on every node to expose the service. In addition, Kubernetes asks the underlying cloud platform (e.g. Alibaba Cloud, Tencent Cloud, AWS) for a load balancer and adds every node ([NodeIP]:[NodePort]) to it as a backend.
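A LoadBalancer Service differs from the earlier manifests only in its type. A minimal sketch (whether an external IP is actually provisioned depends on the cloud provider the cluster runs on):

```yaml
---
apiVersion: v1
kind: Service
metadata: 
  name: web
spec: 
  type: LoadBalancer    # cloud provider provisions an external load balancer
  ports: 
  - port: 80
    targetPort: 80
    protocol: TCP
  selector: 
    app: httpd
```

On a bare-metal cluster without a cloud controller, the EXTERNAL-IP column would simply stay `<pending>`.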

Service Proxy Modes

Iptables

IPVS

A Service is implemented underneath by one of two network modes, iptables or IPVS, which determine how traffic is forwarded.

Switching to IPVS mode on a kubeadm cluster

[root@master ~]# kubectl edit configmap kube-proxy -n kube-system
……
   kind: KubeProxyConfiguration
    metricsBindAddress: ""
    mode: "ipvs"		# change to ipvs
    nodePortAddresses: null
    oomScoreAdj: null
    portRange: ""
……
configmap/kube-proxy edited

[root@master ~]# kubectl delete pod -n kube-system -l k8s-app=kube-proxy		# delete every kube-proxy pod so they restart with the new mode

Note:

  1. The kube-proxy configuration is stored as a ConfigMap
  2. For the change to take effect on every node, the kube-proxy Pod on each node must be recreated

Switching to IPVS mode on a binary installation

# vi kube-proxy-config.yml
mode: ipvs
ipvs:
  scheduler: "rr"

# systemctl restart kube-proxy

Note:

  1. The file name may differ depending on which reference you follow
  2. This file is written by hand and is not present on this machine, so only the method is described here without a demonstration

Packet Flow

Client —> NodePort/ClusterIP (iptables/ipvs load-balancing rules) —> Pods spread across the nodes

View the load-balancing rules:
iptables mode:

iptables-save | grep <SERVICE-NAME>

IPVS mode:

ipvsadm -L -n

Service Workflow Diagram

Iptables vs. IPVS

Iptables:

  • Flexible and powerful
  • Rules are matched and updated by linear traversal, so latency grows linearly with the number of rules

IPVS:

  • Works in kernel space with better performance
  • Rich set of scheduling algorithms: rr, wrr, lc, wlc, ip hash...
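To make two of the schedulers listed above concrete, here is a toy Python model of rr (round robin) and lc (least connection). It only illustrates the selection logic, not IPVS itself:

```python
class RoundRobin:
    """ipvs 'rr': hand out backends in strict rotation."""
    def __init__(self, backends):
        self.backends = list(backends)
        self.i = 0

    def pick(self):
        b = self.backends[self.i % len(self.backends)]
        self.i += 1
        return b


class LeastConnection:
    """ipvs 'lc': pick the backend with the fewest active connections."""
    def __init__(self, backends):
        self.active = {b: 0 for b in backends}

    def pick(self):
        b = min(self.active, key=self.active.get)  # fewest active connections wins
        self.active[b] += 1
        return b

    def done(self, backend):
        self.active[backend] -= 1  # a connection to this backend closed


rr = RoundRobin(["10.244.1.144", "10.244.2.138", "10.244.2.139"])
picks = [rr.pick() for _ in range(4)]  # the 4th pick wraps back to the first backend
```

rr ignores backend load entirely, while lc adapts to long-lived connections, which is why lc (and wlc) tend to suit services with very uneven request durations.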

CoreDNS

CoreDNS is a DNS server and the Kubernetes default, deployed as Pods inside the cluster. It watches the Kubernetes API and creates a DNS record for every Service, used for name resolution.

CoreDNS YAML files:
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns/coredns

ClusterIP A record format: <service-name>.<namespace>.svc.cluster.local
Example: web.default.svc.cluster.local
The .svc.cluster.local suffix is fixed.

# inside a Pod
/ # nslookup web
Server:		10.96.0.10
Address:	10.96.0.10:53

Name:	web.default.svc.cluster.local
Address: 10.96.47.226
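The record format above fits in a one-line helper (a trivial sketch; cluster.local is the common default cluster domain but can be changed at cluster setup):

```python
def service_fqdn(name, namespace="default", cluster_domain="cluster.local"):
    """Build the in-cluster DNS name for a Service."""
    return f"{name}.{namespace}.svc.{cluster_domain}"

print(service_fqdn("web"))  # web.default.svc.cluster.local
```

Inside a Pod you can usually use the short name (`web`) for same-namespace Services, because the Pod's resolv.conf search path appends the rest.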

Workflow Diagram

Hands-on Exercises

Create a Deployment with 3 replicas, perform a rolling image update while recording it in the rollout history, then roll back to the previous version.

[root@master shiyan]# kubectl create deployment test1 --image dengzichen/httpd:v1 --replicas 3
deployment.apps/test1 created
[root@master shiyan]# kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
test1-78df696d6-57vs8   1/1     Running   0          3s
test1-78df696d6-8rbqb   1/1     Running   0          3s
test1-78df696d6-t2h5p   1/1     Running   0          3s

[root@master shiyan]# kubectl expose deploy test1 --port 80 --type NodePort
service/test1 exposed
[root@master shiyan]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        2d2h
test1        NodePort    10.110.141.92   <none>        80:31002/TCP   4s
web          NodePort    10.96.47.226    <none>        80:30000/TCP   156m

[root@master shiyan]# curl http://10.110.141.92
test page on v1

[root@master shiyan]# kubectl set image deploy/test1 httpd=dengzichen/httpd:v2
deployment.apps/test1 image updated
[root@master shiyan]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
test1-7698566d5f-6k2ps   1/1     Running   0          99s
test1-7698566d5f-bmbm7   1/1     Running   0          102s
test1-7698566d5f-dnggs   1/1     Running   0          100s

[root@master shiyan]# curl http://10.110.141.92
test page on v2

[root@master shiyan]# kubectl get rs
NAME               DESIRED   CURRENT   READY   AGE
test1-7698566d5f   3         3         3       7m45s
test1-78df696d6    0         0         0       9m46s

[root@master shiyan]# kubectl rollout undo deploy/test1
deployment.apps/test1 rolled back
[root@master shiyan]# kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
test1-78df696d6-66dht   1/1     Running   0          70s
test1-78df696d6-7w4gb   1/1     Running   0          71s
test1-78df696d6-sgp7s   1/1     Running   0          72s

[root@master shiyan]# curl http://10.110.141.92
test page on v1

Scale an application to 3 replicas

[root@master shiyan]# kubectl create deployment test2 --image dengzichen/httpd:v1
deployment.apps/test2 created
[root@master shiyan]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
test1-78df696d6-66dht    1/1     Running   0          18m
test1-78df696d6-7w4gb    1/1     Running   0          18m
test1-78df696d6-sgp7s    1/1     Running   0          18m
test2-7dd789bd7c-q772z   1/1     Running   0          2s

[root@master shiyan]# kubectl scale --replicas 3 deployment/test2
deployment.apps/test2 scaled
[root@master shiyan]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
test1-78df696d6-66dht    1/1     Running   0          18m
test1-78df696d6-7w4gb    1/1     Running   0          18m
test1-78df696d6-sgp7s    1/1     Running   0          19m
test2-7dd789bd7c-2cjv5   1/1     Running   0          17s
test2-7dd789bd7c-8d4fm   1/1     Running   0          17s
test2-7dd789bd7c-q772z   1/1     Running   0          53s

Create a Pod running three containers: nginx, redis, and memcached

[root@master shiyan]# cat test3.yml 
---
apiVersion: apps/v1
kind: Deployment
metadata: 
  name: test3
  namespace: default
spec: 
  selector: 
    matchLabels: 
      app: test3
  template: 
    metadata: 
      labels: 
        app: test3
    spec: 
      containers: 
      - name: nginx
        image: library/nginx
      - name: redis
        image: library/redis
      - name: memcached
        image: library/memcached
...

[root@master shiyan]# kubectl apply -f test3.yml 
deployment.apps/test3 created

[root@master shiyan]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
test1-78df696d6-66dht    1/1     Running   0          18m
test1-78df696d6-7w4gb    1/1     Running   0          18m
test1-78df696d6-sgp7s    1/1     Running   0          19m
test2-7dd789bd7c-2cjv5   1/1     Running   0          17s
test2-7dd789bd7c-8d4fm   1/1     Running   0          17s
test2-7dd789bd7c-q772z   1/1     Running   0          53s
test3-8587fd4c5-hq7s2    3/3     Running   0          7m50s

[root@master shiyan]# kubectl describe deploy/test3
Name:                   test3
Namespace:              default
CreationTimestamp:      Mon, 27 Dec 2021 01:42:58 +0800
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=test3
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=test3
  Containers:
   nginx:
    Image:        library/nginx
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
   redis:
    Image:        library/redis
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
   memcached:
    Image:        library/memcached
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   test3-8587fd4c5 (1/1 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  2m33s  deployment-controller  Scaled up replica set test3-8587fd4c5 to 1

Create a Service for a Pod, reachable through ClusterIP/NodePort

[root@master shiyan]# kubectl run test4 --image=dengzichen/httpd:v1
pod/test4 created

[root@master shiyan]# kubectl expose pod/test4 --port=80 --type=NodePort
service/test4 exposed

[root@master shiyan]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        2d2h
test1        NodePort    10.110.141.92    <none>        80:31002/TCP   35m
test4        NodePort    10.100.192.217   <none>        80:32081/TCP   22s

[root@master shiyan]# curl http://10.100.192.217
test page on v1

Create a Deployment and Service, then resolve the Service with nslookup from a busybox container

[root@master shiyan]# kubectl create deploy test5 --image dengzichen/httpd:v1
deployment.apps/test5 created

[root@master shiyan]# kubectl expose deploy/test5 --port 80 --type ClusterIP
service/test5 exposed

[root@master ~]# cd shiyan/
[root@master shiyan]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        2d2h
test1        NodePort    10.110.141.92    <none>        80:31002/TCP   49m
test4        NodePort    10.100.192.217   <none>        80:32081/TCP   14m
test5        ClusterIP   10.111.170.228   <none>        80/TCP         66s

[root@master shiyan]# kubectl run -it test5 --image busybox -- /bin/sh
If you don't see a command prompt, try pressing enter.
/ # wget -q -O - http://10.111.170.228
test page on v1
/ # nslookup test5
Server:		10.96.0.10
Address:	10.96.0.10:53

Name:	test5.default.svc.cluster.local
Address: 10.111.170.228
