Cloud-Native Application Deployment on Kubernetes in Practice: A Complete Guide from Docker to Helm

Arthur481 · 2026-02-04T11:13:10+08:00

Introduction

In today's fast-moving cloud computing era, cloud-native applications have become a core driver of enterprise digital transformation. Kubernetes, the de facto standard for container orchestration, gives enterprises powerful capabilities for deploying, scaling, and managing applications. Through a complete hands-on example, this article walks through the full cloud-native deployment workflow, from Docker containerization to Helm chart management.

What Is a Cloud-Native Application

Cloud-native applications are designed and built specifically for cloud environments, taking full advantage of the elasticity, scalability, and distributed nature of cloud platforms. They typically share these characteristics:

  • Containerization: the application is packaged into lightweight containers, guaranteeing environment consistency
  • Microservice architecture: a complex application is split into independent service modules
  • Dynamic orchestration: automated tooling manages deployment and scaling
  • Elastic scaling: resources are adjusted automatically based on load

Environment Setup

Before starting, we need the following environment:

1. Docker

# Check the Docker version
docker --version

# Start and enable the Docker service
sudo systemctl start docker
sudo systemctl enable docker

2. Kubernetes cluster

# Create a local test cluster with kind
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind

# Create the cluster config file
cat > cluster-config.yaml << EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
EOF

# Create the cluster
kind create cluster --config cluster-config.yaml --name my-cluster

3. Helm

# Install Helm
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# Verify the installation
helm version

Step 1: Containerize the Application (Docker)

1. Create a simple web application

First, we create a simple Node.js web application:

// app.js
const express = require('express');
const app = express();
const PORT = process.env.PORT || 3000;

app.get('/', (req, res) => {
  res.json({
    message: 'Hello from Kubernetes!',
    timestamp: new Date().toISOString(),
    hostname: require('os').hostname()
  });
});

app.get('/health', (req, res) => {
  res.status(200).json({ status: 'healthy' });
});

app.listen(PORT, () => {
  console.log(`Server running on port ${PORT}`);
});

2. Create package.json

{
  "name": "k8s-app",
  "version": "1.0.0",
  "description": "Kubernetes demo application",
  "main": "app.js",
  "scripts": {
    "start": "node app.js",
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "dependencies": {
    "express": "^4.18.2"
  }
}

3. Write the Dockerfile

# Dockerfile
FROM node:18-alpine

# Set the working directory
WORKDIR /app

# Copy the package files
COPY package*.json ./

# Install production dependencies only
RUN npm ci --omit=dev

# Copy the application code
COPY . .

# Expose the application port
EXPOSE 3000

# Create a non-root user and switch to it
RUN addgroup -g 1001 -S nodejs && \
    adduser -S appuser -u 1001 -G nodejs
USER appuser

# Health check (node:18-alpine ships no curl, so use busybox wget)
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD wget -qO- http://localhost:3000/health || exit 1

# Start the application
CMD ["npm", "start"]

4. Build and push the Docker image

# Build the image
docker build -t my-k8s-app:v1.0 .

# Start a local Docker registry (optional)
docker run -d -p 5000:5000 --name registry registry:2

# Tag the image for the local registry
docker tag my-k8s-app:v1.0 localhost:5000/my-k8s-app:v1.0

# Push the image to the local registry
docker push localhost:5000/my-k8s-app:v1.0

# Note: kind nodes cannot pull from the host's localhost:5000 without
# extra registry wiring; the simplest alternative is:
# kind load docker-image my-k8s-app:v1.0 --name my-cluster

Step 2: Basic Kubernetes Cluster Configuration

1. Verify cluster status

# Show cluster information
kubectl cluster-info

# Show node status
kubectl get nodes

# Show Pods across all namespaces
kubectl get pods --all-namespaces

2. Create a namespace

# namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-app-ns
  labels:
    name: my-app-ns

# Apply the manifest
kubectl apply -f namespace.yaml

Step 3: Deployment Configuration in Detail

1. Create the Deployment resource

# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
  namespace: my-app-ns
  labels:
    app: my-app
    version: v1.0
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
        version: v1.0
    spec:
      containers:
      - name: my-app-container
        image: localhost:5000/my-k8s-app:v1.0
        ports:
        - containerPort: 3000
          name: http
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
        livenessProbe:
          httpGet:
            path: /health
            port: 3000
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /health
            port: 3000
          initialDelaySeconds: 5
          periodSeconds: 5
        env:
        - name: NODE_ENV
          value: "production"

2. Apply the Deployment

# Apply the Deployment
kubectl apply -f deployment.yaml

# Check Deployment status
kubectl get deployments -n my-app-ns

# Check Pod status
kubectl get pods -n my-app-ns

# Show detailed information
kubectl describe deployment my-app-deployment -n my-app-ns

3. Deployment best practices

Resource limits

resources:
  requests:
    memory: "64Mi"
    cpu: "250m"
  limits:
    memory: "128Mi"
    cpu: "500m"
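The m and Mi suffixes above are Kubernetes quantity notation: 250m means 0.25 CPU cores, and 64Mi means 64 mebibytes. A minimal sketch of the conversion in Node (the helper names are illustrative, not any Kubernetes API, and decimal SI suffixes like M or G are omitted):

```javascript
// Convert Kubernetes CPU quantities ("250m" = 0.25 cores) and
// binary memory quantities ("64Mi" = 64 * 1024^2 bytes) to numbers.
function parseCpu(quantity) {
  return quantity.endsWith('m')
    ? parseInt(quantity, 10) / 1000   // millicores -> cores
    : parseFloat(quantity);           // already whole cores
}

function parseMemory(quantity) {
  const units = { Ki: 1024, Mi: 1024 ** 2, Gi: 1024 ** 3 };
  const [, num, unit] = quantity.match(/^(\d+)(Ki|Mi|Gi)?$/);
  return Number(num) * (unit ? units[unit] : 1);
}

console.log(parseCpu('250m'));      // 0.25
console.log(parseMemory('64Mi'));   // 67108864
```

The real quantity grammar is richer (decimal suffixes, fractional values); this sketch only covers the forms used in the manifests above.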

Health checks

livenessProbe:
  httpGet:
    path: /health
    port: 3000
  initialDelaySeconds: 30
  periodSeconds: 10

readinessProbe:
  httpGet:
    path: /health
    port: 3000
  initialDelaySeconds: 5
  periodSeconds: 5

Step 4: Service Configuration and Exposure

1. Create a ClusterIP Service

# clusterip-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-clusterip
  namespace: my-app-ns
spec:
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 3000
    protocol: TCP
  type: ClusterIP

2. Create a NodePort Service

# nodeport-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport
  namespace: my-app-ns
spec:
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 3000
    nodePort: 30007
    protocol: TCP
  type: NodePort

3. Create a LoadBalancer Service

Note that on a local kind cluster a LoadBalancer Service stays in Pending unless a load-balancer implementation such as MetalLB is installed; managed cloud providers provision one automatically.

# loadbalancer-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-loadbalancer
  namespace: my-app-ns
spec:
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 3000
    protocol: TCP
  type: LoadBalancer

4. Apply the Service configurations

# Apply the ClusterIP Service
kubectl apply -f clusterip-service.yaml

# Apply the NodePort Service
kubectl apply -f nodeport-service.yaml

# Show Service information
kubectl get services -n my-app-ns

# Show the NodePort port
kubectl get svc my-app-nodeport -n my-app-ns -o jsonpath='{.spec.ports[0].nodePort}'
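The jsonpath expression simply walks the Service object's JSON. The equivalent lookup in plain JavaScript, against a trimmed-down sample of what `kubectl get svc -o json` returns:

```javascript
// A trimmed-down Service object, shaped like `kubectl get svc -o json`
// output (only the fields relevant to the lookup are shown).
const svc = {
  kind: 'Service',
  metadata: { name: 'my-app-nodeport', namespace: 'my-app-ns' },
  spec: {
    type: 'NodePort',
    ports: [{ port: 80, targetPort: 3000, nodePort: 30007, protocol: 'TCP' }]
  }
};

// Same path as the jsonpath expression {.spec.ports[0].nodePort}
const nodePort = svc.spec.ports[0].nodePort;
console.log(nodePort); // 30007
```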

Step 5: Managing with Helm Charts

1. Create the Helm chart scaffold

# Create a new Helm chart
helm create my-k8s-app-chart

# Inspect the directory layout
tree my-k8s-app-chart/

2. Edit Chart.yaml

# my-k8s-app-chart/Chart.yaml
apiVersion: v2
name: my-k8s-app-chart
description: A Helm chart for Kubernetes deployment of my-k8s-app
type: application
version: 0.1.0
appVersion: "1.0.0"

3. Configure values.yaml

# my-k8s-app-chart/values.yaml
# Default values for my-k8s-app-chart.

replicaCount: 3

image:
  repository: localhost:5000/my-k8s-app
  tag: "v1.0"
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 80

resources:
  limits:
    cpu: 500m
    memory: 128Mi
  requests:
    cpu: 250m
    memory: 64Mi

livenessProbe:
  httpGet:
    path: /health
    port: 3000
  initialDelaySeconds: 30
  periodSeconds: 10

readinessProbe:
  httpGet:
    path: /health
    port: 3000
  initialDelaySeconds: 5
  periodSeconds: 5

env:
  - name: NODE_ENV
    value: "production"

4. Configure the Deployment template

# my-k8s-app-chart/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "my-k8s-app-chart.fullname" . }}
  labels:
    {{- include "my-k8s-app-chart.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "my-k8s-app-chart.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "my-k8s-app-chart.selectorLabels" . | nindent 8 }}
    spec:
      containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        ports:
        - containerPort: 3000
          name: http
          protocol: TCP
        livenessProbe:
          {{- toYaml .Values.livenessProbe | nindent 12 }}
        readinessProbe:
          {{- toYaml .Values.readinessProbe | nindent 12 }}
        resources:
          {{- toYaml .Values.resources | nindent 12 }}
        env:
        {{- toYaml .Values.env | nindent 10 }}

5. Configure the Service template

# my-k8s-app-chart/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ include "my-k8s-app-chart.fullname" . }}
  labels:
    {{- include "my-k8s-app-chart.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
  - port: {{ .Values.service.port }}
    targetPort: 3000
    protocol: TCP
    name: http
  selector:
    {{- include "my-k8s-app-chart.selectorLabels" . | nindent 4 }}

6. Configure the helper templates

# my-k8s-app-chart/templates/_helpers.tpl
{{/*
Expand the name of the chart.
*/}}
{{- define "my-k8s-app-chart.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "my-k8s-app-chart.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}

{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "my-k8s-app-chart.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Common labels
*/}}
{{- define "my-k8s-app-chart.labels" -}}
helm.sh/chart: {{ include "my-k8s-app-chart.chart" . }}
{{ include "my-k8s-app-chart.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}

{{/*
Selector labels
*/}}
{{- define "my-k8s-app-chart.selectorLabels" -}}
app.kubernetes.io/name: {{ include "my-k8s-app-chart.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

7. Deploy the Helm chart

# Package the chart
helm package my-k8s-app-chart/

# Install the chart (my-app-ns was created earlier; add --create-namespace otherwise)
helm install my-app ./my-k8s-app-chart-0.1.0.tgz -n my-app-ns

# List installed releases
helm list -n my-app-ns

# Show everything about the release
helm get all my-app -n my-app-ns

# Upgrade the release
helm upgrade my-app ./my-k8s-app-chart-0.1.0.tgz -n my-app-ns --set replicaCount=5

Step 6: Advanced Configuration and Optimization

1. Configure ConfigMap and Secret

# configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config
  namespace: my-app-ns
data:
  app.properties: |
    server.port=3000
    logging.level=INFO
  database.url: "jdbc:mysql://db-service:3306/myapp"

# secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-app-secret
  namespace: my-app-ns
type: Opaque
data:
  # base64 encoded values
  db-password: cGFzc3dvcmQxMjM=
  api-key: YWJjZGVmZ2hpams=
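A Secret's data values must be base64-encoded by hand (encoding, not encryption). The two values above were produced like this, sketched in Node to match the manifest:

```javascript
// Secret data values are base64-encoded plain bytes, not encrypted.
const encode = (s) => Buffer.from(s, 'utf8').toString('base64');
const decode = (b64) => Buffer.from(b64, 'base64').toString('utf8');

console.log(encode('password123'));      // cGFzc3dvcmQxMjM=
console.log(decode('YWJjZGVmZ2hpams=')); // abcdefghijk
```

Equivalently, `kubectl create secret generic ... --from-literal=...` does the encoding for you.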

2. Update the Deployment to use the ConfigMap and Secret

Note that envFrom turns each key into an environment variable, and keys that are not valid variable names (such as app.properties, database.url, or db-password) may be skipped and reported in the Pod's events; rename such keys or mount them as files instead.

# updated-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
  namespace: my-app-ns
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app-container
        image: localhost:5000/my-k8s-app:v1.0
        ports:
        - containerPort: 3000
        envFrom:
        - configMapRef:
            name: my-app-config
        - secretRef:
            name: my-app-secret
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
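Inside the container, each valid key injected via envFrom shows up as an ordinary environment variable. A sketch of how the Node app might read them; DATABASE_URL is a hypothetical renamed key (the manifest's database.url contains a dot, so it would need renaming before it can be injected):

```javascript
// Read configuration injected via envFrom, with defaults so the app
// also runs outside the cluster. DATABASE_URL is a hypothetical key
// name standing in for the manifest's "database.url".
function getConfig(env = process.env) {
  return {
    nodeEnv: env.NODE_ENV || 'development',
    databaseUrl: env.DATABASE_URL || 'jdbc:mysql://localhost:3306/myapp'
  };
}

console.log(getConfig({ NODE_ENV: 'production' }).nodeEnv); // production
console.log(getConfig({}).databaseUrl);
```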

3. Configure Ingress

An Ingress resource only takes effect once an ingress controller (for example ingress-nginx, which the annotation below targets) is running in the cluster.

# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
  namespace: my-app-ns
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app-clusterip
            port:
              number: 80

Step 7: Monitoring and Debugging

1. Viewing logs

# View logs for all Pods labeled app=my-app
kubectl logs -l app=my-app -n my-app-ns

# Follow logs in real time
kubectl logs -l app=my-app -n my-app-ns -f

# View logs for a specific Pod (substitute an actual Pod name)
kubectl logs my-app-deployment-7b5b8c9d4-xyz12 -n my-app-ns

2. Resource monitoring

# Show Pod resource usage (requires metrics-server in the cluster)
kubectl top pods -n my-app-ns

# Show node resource usage
kubectl top nodes

# Show a Pod's detailed resource configuration
kubectl describe pod my-app-deployment-7b5b8c9d4-xyz12 -n my-app-ns

3. Debugging with port forwarding

# Forward a local port to the Service
kubectl port-forward svc/my-app-nodeport 8080:80 -n my-app-ns

# Access the application
curl http://localhost:8080/

Best Practices Summary

1. Image management

image:
  repository: registry.example.com/my-app
  tag: "v1.0.0"
  pullPolicy: IfNotPresent

2. Resource configuration

resources:
  requests:
    memory: "128Mi"
    cpu: "100m"
  limits:
    memory: "256Mi"
    cpu: "200m"

3. Health checks

livenessProbe:
  httpGet:
    path: /health
    port: 3000
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 3

readinessProbe:
  httpGet:
    path: /ready
    port: 3000
  initialDelaySeconds: 5
  periodSeconds: 5
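The readinessProbe above targets /ready, which the sample app does not yet implement. A sketch of separate liveness and readiness handlers as plain functions (the depsReady flag is hypothetical), which could back app.get('/health') and app.get('/ready') routes in app.js:

```javascript
// Liveness asks "is the process alive?"; readiness asks "can it serve
// traffic yet?". Plain functions keep the logic easy to unit-test;
// wire their return values into Express responses as needed.
let depsReady = false; // flip to true once e.g. the DB connection is up

function livenessStatus() {
  return { code: 200, body: { status: 'healthy' } };
}

function readinessStatus() {
  return depsReady
    ? { code: 200, body: { status: 'ready' } }
    : { code: 503, body: { status: 'not ready' } };
}

console.log(readinessStatus().code); // 503 until dependencies are up
depsReady = true;
console.log(readinessStatus().code); // 200
```

Failing readiness only removes the Pod from Service endpoints; failing liveness restarts the container, so keep the two checks separate.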

Full Deployment Workflow Recap

Through this tutorial we completed the full cloud-native deployment workflow, from Docker containerization to Helm chart management:

  1. Application containerization: write a Dockerfile and build the application image
  2. Basic Kubernetes configuration: set up the cluster and a namespace
  3. Deployment configuration: define the deployment strategy and resource limits
  4. Service exposure: configure different Service types to expose the application
  5. Helm chart management: make deployments repeatable with Helm
  6. Advanced optimization: configure ConfigMap, Secret, and Ingress
  7. Monitoring and debugging: establish monitoring and debugging workflows

Future Directions

1. CI/CD integration

// Jenkins Pipeline example
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'docker build -t my-app:${BUILD_NUMBER} .'
                sh 'docker push my-app:${BUILD_NUMBER}'
            }
        }
        stage('Deploy') {
            steps {
                sh 'helm upgrade --install my-app ./my-k8s-app-chart --set image.tag=${BUILD_NUMBER}'
            }
        }
    }
}

2. Multi-environment management

# Deploy to different environments with different values files
helm install prod-app ./my-k8s-app-chart -f values-prod.yaml
helm install dev-app ./my-k8s-app-chart -f values-dev.yaml
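Helm layers later values sources over earlier ones: values.yaml, then -f files, then --set flags. That precedence can be sketched as a deep merge (illustrative only; Helm's real merge also handles nulls and list replacement):

```javascript
// Later sources win over earlier ones, mimicking Helm's value
// precedence: values.yaml < -f values-<env>.yaml < --set overrides.
function mergeValues(base, ...overrides) {
  const out = { ...base };
  for (const o of overrides) {
    for (const [k, v] of Object.entries(o)) {
      out[k] = (v && typeof v === 'object' && !Array.isArray(v))
        ? mergeValues(out[k] || {}, v)   // recurse into nested maps
        : v;                             // scalars and arrays replace
    }
  }
  return out;
}

const base = { replicaCount: 3, image: { repository: 'localhost:5000/my-k8s-app', tag: 'v1.0' } };
const prod = { replicaCount: 5, image: { tag: 'v1.0.1' } };
console.log(mergeValues(base, prod));
// { replicaCount: 5, image: { repository: 'localhost:5000/my-k8s-app', tag: 'v1.0.1' } }
```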

Conclusion

Through a complete hands-on example, this article covered the full journey of a cloud-native application, from Docker containerization to Kubernetes deployment. From basic environment setup to Helm chart management, it touched the core building blocks of cloud-native deployment. By practicing these techniques, teams can containerize quickly and build more elastic, scalable application architectures.

As cloud-native technology keeps evolving, mastering these fundamentals is becoming an essential skill for modern development teams. Apply these practices incrementally in real projects and tune them to your specific needs.

With continued learning and practice, developers can make better use of tools like Kubernetes and Helm to build robust, efficient cloud-native applications that solidly support enterprise digital transformation.
