Kubernetes Tutorial#

Introduction#

Kubernetes (K8s for short) is an open-source container orchestration platform for automating the deployment, scaling, and management of containerized applications. Originally developed at Google, it is now maintained by the Cloud Native Computing Foundation (CNCF) and has become the de facto standard for cloud-native infrastructure. Kubernetes provides powerful features such as service discovery, load balancing, autoscaling, and rolling updates, making containerized applications simple and efficient to operate.

Kubernetes Core Concepts#

  • Pod: the smallest schedulable unit in Kubernetes, containing one or more containers
  • Service: defines how Pods are accessed and provides load balancing
  • Deployment: manages the number of Pod replicas and performs rolling updates
  • StatefulSet: a controller for managing stateful applications
  • DaemonSet: runs one Pod replica on every node
  • ConfigMap: stores configuration data
  • Secret: stores sensitive data
  • Namespace: provides isolation between groups of resources
  • PersistentVolume: a piece of persistent storage
  • PersistentVolumeClaim: a request for persistent storage
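Since the Pod is the unit everything else builds on, it helps to see one as a manifest. A minimal sketch (the name and image are placeholders):

```yaml
# minimal-pod.yaml — a minimal Pod with a single container
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod        # placeholder name
spec:
  containers:
  - name: hello
    image: nginx         # any container image works here
```

Apply it with `kubectl apply -f minimal-pod.yaml`; higher-level objects such as Deployments embed this same Pod template.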

Kubernetes Architecture#

  • Control plane: includes the API server, scheduler, controller manager, and etcd
  • Nodes: worker machines that run Pods, each with a kubelet, kube-proxy, and a container runtime
  • etcd: the distributed key-value store that holds cluster state
  • Network plugin: provides networking between Pods

Getting Started#

Installing Kubernetes#

Installing Kubernetes in different environments:

# Install Kubernetes with kubeadm
# Install kubeadm, kubelet, and kubectl
# Note: the apt.kubernetes.io repository below is deprecated; current packages are published at pkgs.k8s.io
apt update && apt install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | tee -a /etc/apt/sources.list.d/kubernetes.list
apt update
apt install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl

# Initialize the control plane
kubeadm init --pod-network-cidr=10.244.0.0/16

# Configure kubectl for the current user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install a Pod network plugin (Flannel; the project has since moved to flannel-io/flannel)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# Join worker nodes
# Run the kubeadm join command printed by kubeadm init on each worker node
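The join command printed by `kubeadm init` has the following general shape; the address, token, and hash below are placeholders for the values your own init run prints:

```shell
# On each worker node (placeholders — use the values printed by kubeadm init)
kubeadm join <control-plane-host>:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>

# If the token has expired, regenerate the full command on the control plane:
kubeadm token create --print-join-command
```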

Basic Kubernetes Commands#

Common kubectl commands:

# View cluster information
kubectl cluster-info

# List nodes
kubectl get nodes

# List Pods
kubectl get pods --all-namespaces

# List Services
kubectl get services --all-namespaces

# List Deployments
kubectl get deployments --all-namespaces

# Create a Pod
kubectl run nginx --image=nginx

# Show Pod details
kubectl describe pod nginx

# Delete a Pod
kubectl delete pod nginx

# Expose a Pod as a Service
kubectl expose pod nginx --port=80 --type=NodePort

Deploying Your First Application#

Deploying a simple application to Kubernetes:

# Create a Deployment
kubectl create deployment nginx --image=nginx --replicas=3

# View the Deployment
kubectl get deployment nginx

# View its Pods
kubectl get pods -l app=nginx

# Expose the Deployment as a Service
kubectl expose deployment nginx --port=80 --type=LoadBalancer

# View the Service
kubectl get service nginx

# Access the application
curl http://$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[0].address}'):$(kubectl get service nginx -o jsonpath='{.spec.ports[0].nodePort}')

Beginner Usage#

Kubernetes Network Configuration#

Configuring Kubernetes networking:

# Check the network plugin
kubectl get pods -n kube-system | grep flannel

# Apply a default-deny network policy
kubectl apply -f network-policy.yaml

# network-policy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress

# Create a network policy that allows access from specific Pods
kubectl apply -f allow-access.yaml

# allow-access.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-nginx-access
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: nginx
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 80

Kubernetes Storage Configuration#

Configuring Kubernetes storage:

# Create a PersistentVolume
kubectl apply -f pv.yaml

# pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"

# Create a PersistentVolumeClaim
kubectl apply -f pvc.yaml

# pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

# Create a Pod that uses the PVC
kubectl apply -f pod-with-pvc.yaml

# pod-with-pvc.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: nginx
    volumeMounts:
    - name: my-volume
      mountPath: /usr/share/nginx/html
  volumes:
  - name: my-volume
    persistentVolumeClaim:
      claimName: my-pvc

Kubernetes Configuration Management#

Managing Kubernetes configuration:

# Create a ConfigMap
kubectl create configmap my-config --from-literal=key1=value1 --from-literal=key2=value2

# View the ConfigMap
kubectl get configmap my-config

# Create a Pod that uses the ConfigMap
kubectl apply -f pod-with-configmap.yaml

# pod-with-configmap.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: nginx
    env:
    - name: KEY1
      valueFrom:
        configMapKeyRef:
          name: my-config
          key: key1
    - name: KEY2
      valueFrom:
        configMapKeyRef:
          name: my-config
          key: key2

# Create a Secret
kubectl create secret generic my-secret --from-literal=password=mysecretpassword

# View the Secret
kubectl get secret my-secret

# Create a Pod that uses the Secret
kubectl apply -f pod-with-secret.yaml

# pod-with-secret.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: nginx
    env:
    - name: PASSWORD
      valueFrom:
        secretKeyRef:
          name: my-secret
          key: password
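Besides environment variables, a Secret can also be mounted as a volume, which keeps values out of the process environment. A sketch reusing the my-secret created above:

```yaml
# pod-with-secret-volume.yaml — mounts each Secret key as a file under /etc/secrets
apiVersion: v1
kind: Pod
metadata:
  name: my-pod-secret-volume
spec:
  containers:
  - name: my-container
    image: nginx
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secrets
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: my-secret   # the Secret created above
```

Inside the container, the password key appears as the file /etc/secrets/password.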

Intermediate Usage#

Kubernetes Deployment Management#

Managing Kubernetes Deployments:

# Create a Deployment
kubectl apply -f deployment.yaml

# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19.10
        ports:
        - containerPort: 80

# Check Deployment rollout status
kubectl rollout status deployment nginx-deployment

# Perform a rolling update
kubectl set image deployment nginx-deployment nginx=nginx:1.20.0

# Watch the rollout
kubectl rollout status deployment nginx-deployment

# Roll back the update
kubectl rollout undo deployment nginx-deployment

# View rollout history
kubectl rollout history deployment nginx-deployment
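The pace of a rolling update is controlled by the Deployment's update strategy. A sketch of the relevant spec fields (the values shown are illustrative, not the defaults):

```yaml
# Fragment of a Deployment spec controlling rolling-update behavior
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most 1 extra Pod above the desired replica count
      maxUnavailable: 0    # never drop below the desired replica count
```

With maxUnavailable: 0, Kubernetes always starts a new Pod and waits for it to become ready before terminating an old one.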

Kubernetes Service Management#

Managing Kubernetes Services:

# Create a ClusterIP Service
kubectl apply -f clusterip-service.yaml

# clusterip-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-clusterip
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80

# Create a NodePort Service
kubectl apply -f nodeport-service.yaml

# nodeport-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080
  type: NodePort

# Create a LoadBalancer Service
kubectl apply -f loadbalancer-service.yaml

# loadbalancer-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-loadbalancer
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
  type: LoadBalancer

Kubernetes Health Checks#

Configuring Kubernetes health checks:

# Create a Deployment with readiness and liveness probes
kubectl apply -f deployment-with-healthcheck.yaml

# deployment-with-healthcheck.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 15
          periodSeconds: 20

Upper-Intermediate Usage#

Kubernetes Cluster Management#

Managing a Kubernetes cluster:

# Check node status
kubectl get nodes

# Show node details
kubectl describe node node1

# Mark a node as unschedulable
kubectl cordon node1

# Drain a node
kubectl drain node1 --ignore-daemonsets

# Mark a node as schedulable again
kubectl uncordon node1

# Upgrade the cluster (run on the control-plane node)
kubeadm upgrade plan
kubeadm upgrade apply v1.22.0

# Check cluster version
kubectl version

Kubernetes Resource Management#

Managing Kubernetes resources:

# Create a ResourceQuota
kubectl apply -f resourcequota.yaml

# resourcequota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: my-quota
  namespace: default
spec:
  hard:
    pods: "10"
    requests.cpu: "4"
    requests.memory: 4Gi
    limits.cpu: "8"
    limits.memory: 8Gi

# Create a LimitRange
kubectl apply -f limitrange.yaml

# limitrange.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: my-limitrange
  namespace: default
spec:
  limits:
  - default:
      cpu: 1
      memory: 1Gi
    defaultRequest:
      cpu: 500m
      memory: 512Mi
    type: Container

# Create a Pod with resource requests and limits
kubectl apply -f pod-with-resources.yaml

# pod-with-resources.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: nginx
    resources:
      requests:
        cpu: 500m
        memory: 512Mi
      limits:
        cpu: 1
        memory: 1Gi

Kubernetes Security Configuration#

Configuring Kubernetes security:

# Create a ServiceAccount
kubectl create serviceaccount my-serviceaccount

# View the ServiceAccount
kubectl get serviceaccount my-serviceaccount

# Create a ClusterRole
kubectl apply -f clusterrole.yaml

# clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: my-clusterrole
rules:
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "list", "watch"]

# Create a ClusterRoleBinding
kubectl apply -f clusterrolebinding.yaml

# clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: my-clusterrolebinding
subjects:
- kind: ServiceAccount
  name: my-serviceaccount
  namespace: default
roleRef:
  kind: ClusterRole
  name: my-clusterrole
  apiGroup: rbac.authorization.k8s.io

# Create a Pod that uses the ServiceAccount
kubectl apply -f pod-with-serviceaccount.yaml

# pod-with-serviceaccount.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: my-serviceaccount
  containers:
  - name: my-container
    image: nginx

Advanced Usage#

Kubernetes Cluster Monitoring#

Monitoring a Kubernetes cluster:

# Install the Prometheus Operator custom resource definitions (from kube-prometheus)
kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/kube-prometheus/master/manifests/setup/prometheus-operator-0servicemonitorCustomResourceDefinition.yaml
kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/kube-prometheus/master/manifests/setup/prometheus-operator-0alertmanagerConfigCustomResourceDefinition.yaml
kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/kube-prometheus/master/manifests/setup/prometheus-operator-0podmonitorCustomResourceDefinition.yaml
kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/kube-prometheus/master/manifests/setup/prometheus-operator-0probesCustomResourceDefinition.yaml
kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/kube-prometheus/master/manifests/setup/prometheus-operator-0prometheusRuleCustomResourceDefinition.yaml
kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/kube-prometheus/master/manifests/setup/prometheus-operator-0prometheusCustomResourceDefinition.yaml
kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/kube-prometheus/master/manifests/setup/prometheus-operator-0alertmanagerCustomResourceDefinition.yaml

# Deploy the Prometheus Operator, Grafana, Prometheus, and Alertmanager
kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/kube-prometheus/master/manifests/prometheus-operator-deployment.yaml
kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/kube-prometheus/master/manifests/grafana-deployment.yaml
kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/kube-prometheus/master/manifests/prometheus-deployment.yaml
kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/kube-prometheus/master/manifests/alertmanager-deployment.yaml

# Check the monitoring components
kubectl get pods -n monitoring

# Access Grafana
kubectl port-forward deployment/grafana 3000:3000 -n monitoring

Kubernetes Log Management#

Managing Kubernetes logs:

# Install an Elastic (ELK) Stack using the ECK quickstart recipes (requires the ECK operator to be installed first)
kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/1.7.0/config/recipes/elasticsearch/quickstart.yaml
kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/1.7.0/config/recipes/kibana/quickstart.yaml
kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/1.7.0/config/recipes/filebeat/quickstart.yaml

# Check the Elastic Stack components
kubectl get pods -n default | grep elastic
kubectl get pods -n default | grep kibana
kubectl get pods -n default | grep filebeat

# Access Kibana
kubectl port-forward deployment/kibana-kb-http 5601:5601

# View Pod logs
kubectl logs pod/my-pod

# View logs from a specific container
kubectl logs pod/my-pod -c my-container

# Stream logs
kubectl logs -f pod/my-pod

Kubernetes Autoscaling#

Configuring Kubernetes autoscaling:

# Install the Metrics Server
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.5.0/components.yaml

# Check the Metrics Server
kubectl get pods -n kube-system | grep metrics-server

# Create a HorizontalPodAutoscaler
kubectl autoscale deployment nginx-deployment --cpu-percent=50 --min=1 --max=10

# View the HorizontalPodAutoscaler
kubectl get hpa nginx-deployment
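The same autoscaler can be declared as a manifest, which is easier to version-control than the imperative command. A sketch using the autoscaling/v2 API:

```yaml
# hpa.yaml — declarative equivalent of the kubectl autoscale command above
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-deployment
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
```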

# Create a VerticalPodAutoscaler (requires the VPA components to be installed separately)
kubectl apply -f vpa.yaml

# vpa.yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: nginx-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  updatePolicy:
    updateMode: "Auto"

# View the VerticalPodAutoscaler
kubectl get vpa nginx-vpa

Expert Usage#

Kubernetes Multi-Cluster Management#

Managing multiple Kubernetes clusters:

# Install Rancher (Rancher is normally installed via Helm; adjust for your environment)
kubectl apply -f https://raw.githubusercontent.com/rancher/rancher/v2.7.0/packaged/rancher.yaml

# Check the Rancher deployment
kubectl get pods -n cattle-system

# Access Rancher
kubectl port-forward deployment/rancher 8080:80 -n cattle-system

# Add clusters to Rancher
# Clusters are registered through the Rancher UI

# Check cluster status
kubectl get clusters -n cattle-system

Kubernetes Cloud-Native Applications#

Deploying cloud-native applications to Kubernetes:

# Deploy a microservices application
kubectl apply -f microservices.yaml

# microservices.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: user-service
  template:
    metadata:
      labels:
        app: user-service
    spec:
      containers:
      - name: user-service
        image: my-registry/user-service:latest
        ports:
        - containerPort: 8080
        env:
        - name: DB_HOST
          value: db-service
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: order-service
  template:
    metadata:
      labels:
        app: order-service
    spec:
      containers:
      - name: order-service
        image: my-registry/order-service:latest
        ports:
        - containerPort: 8081
        env:
        - name: DB_HOST
          value: db-service
        - name: USER_SERVICE_HOST
          value: user-service
---
apiVersion: v1
kind: Service
metadata:
  name: user-service
spec:
  selector:
    app: user-service
  ports:
  - port: 8080
    targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: order-service
spec:
  selector:
    app: order-service
  ports:
  - port: 8081
    targetPort: 8081
---
apiVersion: v1
kind: Service
metadata:
  name: db-service
spec:
  selector:
    app: db
  ports:
  - port: 5432
    targetPort: 5432

Kubernetes Security Auditing#

Performing a Kubernetes security audit:

# Configure audit logging
vi /etc/kubernetes/manifests/kube-apiserver.yaml

# Add the audit logging flags to the kube-apiserver command
- --audit-log-path=/var/log/kubernetes/audit.log
- --audit-log-maxage=30
- --audit-log-maxbackup=10
- --audit-log-maxsize=100
- --audit-policy-file=/etc/kubernetes/audit-policy.yaml

# Create the audit policy file
vi /etc/kubernetes/audit-policy.yaml

# Add the audit policy
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata

# Restart kube-apiserver (editing the static pod manifest triggers an automatic restart; deleting the mirror pod forces one)
kubectl delete pod kube-apiserver-master -n kube-system

# View the audit log
cat /var/log/kubernetes/audit.log
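The single Metadata rule above logs every request at the same level, which is noisy. Audit rules are evaluated in order and the first match wins, so a more selective policy is usually preferable; a sketch (the specific rule set is illustrative):

```yaml
# audit-policy.yaml — skip noisy endpoints, log sensitive objects at appropriate levels
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: None
  nonResourceURLs: ["/healthz*", "/version"]   # skip health checks
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets", "configmaps"]       # record who touched secrets, not their contents
- level: RequestResponse
  resources:
  - group: ""
    resources: ["pods"]                        # full request/response bodies for pod changes
- level: Metadata                              # default for everything else
```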

Practical Examples#

Case 1: Deploying a Highly Available Application#

Scenario: deploy a highly available web application with a frontend, a backend, and a database.

Solution: use Kubernetes to deploy the application with multiple replicas, persistent storage, and service discovery.

Steps

  1. Create a namespace

    kubectl create namespace my-app
  2. Deploy the database

    kubectl apply -f db-deployment.yaml -n my-app
  3. Deploy the backend service

    kubectl apply -f backend-deployment.yaml -n my-app
  4. Deploy the frontend service

    kubectl apply -f frontend-deployment.yaml -n my-app
  5. Configure Ingress

    kubectl apply -f ingress.yaml -n my-app
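The ingress.yaml referenced in the steps is not shown; a minimal sketch of what it might contain (the host, service name, and port are hypothetical):

```yaml
# ingress.yaml — routes external traffic to the frontend Service (names are hypothetical)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
  namespace: my-app
spec:
  rules:
  - host: my-app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend        # the frontend Service deployed above
            port:
              number: 80
```

An Ingress controller (e.g. ingress-nginx) must be running in the cluster for this resource to take effect.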

Results

  • Deployed a highly available web application
  • Configured frontend, backend, and database services
  • Achieved automatic service discovery and load balancing

Case 2: CI/CD Integration#

Scenario: integrate Kubernetes with a CI/CD system for automated deployments.

Solution: integrate GitLab CI or Jenkins with Kubernetes.

Steps

  1. Configure GitLab CI

    # .gitlab-ci.yml
    stages:
      - build
      - test
      - deploy
    
    build:
      stage: build
      script:
        - docker build -t my-registry/my-app:$CI_COMMIT_SHA .
        - docker push my-registry/my-app:$CI_COMMIT_SHA
    
    test:
      stage: test
      script:
        - docker run my-registry/my-app:$CI_COMMIT_SHA npm test
    
    deploy:
      stage: deploy
      script:
        - kubectl set image deployment/my-app my-app=my-registry/my-app:$CI_COMMIT_SHA -n my-app
        - kubectl rollout status deployment/my-app -n my-app
      only:
        - main
  2. Configure Jenkins

    // Jenkinsfile
    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    sh 'docker build -t my-registry/my-app:$BUILD_NUMBER .'
                    sh 'docker push my-registry/my-app:$BUILD_NUMBER'
                }
            }
            stage('Test') {
                steps {
                    sh 'docker run my-registry/my-app:$BUILD_NUMBER npm test'
                }
            }
            stage('Deploy') {
                steps {
                    sh 'kubectl set image deployment/my-app my-app=my-registry/my-app:$BUILD_NUMBER -n my-app'
                    sh 'kubectl rollout status deployment/my-app -n my-app'
                }
            }
        }
    }

Results

  • Integrated Kubernetes with a CI/CD system
  • Automated building, testing, and deployment
  • Improved development and deployment efficiency

Case 3: Monitoring and Logging#

Scenario: deploy monitoring and logging systems to make the cluster observable.

Solution: use Prometheus, Grafana, and the Elastic (ELK) Stack.

Steps

  1. Deploy Prometheus and Grafana

    kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/kube-prometheus/master/manifests/kube-prometheus.yaml
  2. Deploy the ELK Stack

    kubectl apply -f elk-stack.yaml
  3. Configure alerting

    kubectl apply -f alerts.yaml
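The alerts.yaml referenced in the steps is not shown; with the Prometheus Operator, alerts are expressed as PrometheusRule resources. A sketch (the rule name and thresholds are hypothetical):

```yaml
# alerts.yaml — fires when a container restarts repeatedly (thresholds are hypothetical)
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: my-app-alerts
  namespace: monitoring
spec:
  groups:
  - name: my-app.rules
    rules:
    - alert: PodRestartingTooOften
      expr: increase(kube_pod_container_status_restarts_total[1h]) > 3
      for: 10m
      labels:
        severity: warning
      annotations:
        summary: "Pod {{ $labels.pod }} is restarting frequently"
```

The expression uses the kube-state-metrics restart counter, which kube-prometheus deploys by default.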

Results

  • Deployed monitoring and logging systems
  • Made the cluster observable
  • Configured alerting to detect and handle problems promptly

Summary#

Kubernetes is a powerful container orchestration platform. This tutorial has walked through its use from getting started to expert-level operation.

Key Topics Reviewed#

  • Basics: installing and configuring Kubernetes
  • Cluster management: creating and managing Kubernetes clusters
  • Application deployment: deploying applications to a cluster
  • Advanced features: networking, storage, and security configuration
  • Enterprise use: high availability, monitoring, and log management

Best Practices#

  1. Cluster planning: size the cluster for your workloads and choose suitable networking and storage options
  2. Application design: design cloud-native applications, using a microservices architecture where appropriate
  3. Resource management: plan and limit resource usage to avoid contention
  4. Security hardening: configure RBAC, network policies, and TLS to harden the cluster
  5. Monitoring: deploy Prometheus and Grafana to monitor cluster state in real time
  6. Log management: use the ELK Stack to centralize and analyze logs
  7. CI/CD integration: integrate with CI/CD systems for automated deployments
  8. Backup strategy: back up cluster data regularly to ensure disaster recovery
  9. Documentation: document cluster configuration and deployment procedures for maintenance and troubleshooting

Caveats#

  1. Version management: make sure all components run compatible Kubernetes versions
  2. Network configuration: configure networking carefully to keep Pod-to-Pod traffic secure
  3. Storage performance: choose storage with adequate performance so it does not become a bottleneck
  4. Security auditing: audit the cluster regularly to find and fix security issues
  5. Capacity planning: plan capacity against application load so the cluster can absorb traffic peaks
  6. Upgrade strategy: plan cluster upgrades to avoid service disruption during the process
  7. Failure drills: rehearse failure scenarios regularly to improve the cluster's fault tolerance

With careful study and use of Kubernetes, you can build a highly available, scalable container orchestration platform that gives applications a stable, reliable runtime environment. Kubernetes scales from small applications to large enterprise workloads.