Hands-On Deployment of a Kubernetes-Native AI Platform: Building a Cloud-Native Machine Learning Training and Inference Environment from Scratch, with TensorFlow and PyTorch Support

dashen77 2025-11-26T09:49:15+08:00


Introduction: Why a Cloud-Native AI Platform?

With the rapid advance of artificial intelligence (AI) and machine learning (ML), enterprise demand for model training and inference infrastructure keeps growing. Traditional single-machine or private-server deployments can no longer handle large-scale, high-concurrency, elastically scaling workloads. Against this backdrop, cloud-native architecture has become the default choice for building modern AI platforms.

Kubernetes (K8s), the de facto standard for container orchestration, provides powerful scheduling, management, and observability capabilities for distributed computing. Combined with Kubeflow, the Google-initiated open-source project, enterprises can quickly build a complete, scalable, Kubernetes-native AI platform that supports mainstream frameworks such as TensorFlow and PyTorch.

This article walks through deploying a complete Kubernetes-based, cloud-native machine learning platform from scratch in a real environment, covering:

  • Installing and configuring Kubeflow
  • Scheduling and managing distributed training jobs
  • Deploying and optimizing model inference services
  • Fine-grained GPU resource management
  • Autoscaling mechanisms
  • Security and access-control strategies

Everything below is based on production-grade practice and includes runnable code examples and best-practice recommendations.

1. Environment Preparation and Infrastructure Planning

1.1 Hardware and Software Requirements

Component            Recommended configuration
Kubernetes cluster   v1.24+, at least 3 nodes (1 control plane + 2 workers)
CPU                  ≥8 cores (16+ recommended)
Memory               ≥32 GB (64 GB+ recommended)
GPU                  NVIDIA A100 / V100 / T4 (at least 1 card per GPU node)
Storage              PV/PVC with dynamic provisioning (e.g. NFS, Ceph, EBS)
Operating system     Ubuntu 20.04 LTS / CentOS Stream 8
Container runtime    containerd (recommended) or Docker

Note: for GPU nodes, make sure the host already has the NVIDIA driver plus nvidia-container-runtime (or the legacy nvidia-docker) installed.

1.2 Network and Domain Planning

The following naming convention is suggested:

# DNS records (managed via CoreDNS / external-dns)
kubeflow.example.com        # Kubeflow UI entry point
ml-pipeline.example.com     # Pipelines service entry point
minio.example.com           # S3-compatible object storage

🔐 Use HTTPS with automated Let's Encrypt certificate issuance (via cert-manager).

1.3 Prerequisite Tooling

Install the following tools on the control node:

# 1. kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/

# 2. kustomize (used for the Kubeflow deployment)
curl -L https://github.com/kubernetes-sigs/kustomize/releases/download/v5.0.4/kustomize_v5.0.4_linux_amd64.tar.gz | tar xz
sudo mv kustomize /usr/local/bin/

# 3. Helm (used to deploy some components)
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh

Verify the installation:

kubectl version --client
helm version
kustomize version

2. Deploying the Kubernetes Cluster (Using kubeadm)

2.1 Initialize the Control-Plane Node

# 1. Install kubelet, kubeadm, and kubectl
sudo apt update
sudo apt install -y apt-transport-https ca-certificates curl gnupg lsb-release

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-key.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-key.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt update
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

2.2 Bootstrap the Cluster and Install the CNI Plugin (Calico)

# 1. Initialize the control-plane node
sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --kubernetes-version=v1.28.0

# 2. Configure kubectl access
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# 3. Install the Calico network plugin
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/calico.yaml

2.3 Join the Worker Nodes

Run on each worker node:

sudo kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

💡 Run kubeadm init phase upload-certs --upload-certs to generate a shared certificate key, which makes a later high-availability setup easier.

Verify node status:

kubectl get nodes -o wide
# All nodes should report Ready
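For scripted health checks (e.g. in CI), the same verification can be done by parsing `kubectl get nodes -o json`. A minimal sketch, assuming the JSON has been captured via a pipe or `subprocess` (the sample node names below are invented):

```python
import json

def ready_nodes(nodes_json: str):
    """Split node names from `kubectl get nodes -o json` into (ready, not_ready)."""
    ready, not_ready = [], []
    for node in json.loads(nodes_json)["items"]:
        name = node["metadata"]["name"]
        # A node is Ready when its "Ready" condition has status "True"
        conditions = {c["type"]: c["status"] for c in node["status"]["conditions"]}
        (ready if conditions.get("Ready") == "True" else not_ready).append(name)
    return ready, not_ready

# Heavily trimmed sample of the real output shape, for illustration:
sample = json.dumps({"items": [
    {"metadata": {"name": "master-1"},
     "status": {"conditions": [{"type": "Ready", "status": "True"}]}},
    {"metadata": {"name": "worker-1"},
     "status": {"conditions": [{"type": "Ready", "status": "False"}]}},
]})
print(ready_nodes(sample))  # (['master-1'], ['worker-1'])
```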

3. Deploying Kubeflow: Core Components and Architecture

3.1 Kubeflow Architecture Overview

Kubeflow is an end-to-end platform for ML workflows. Its core components include:

Component                            Function
KFAM (Kubeflow Access Management)    User authentication and access control
KFServing (now KServe)               Model inference serving (TensorFlow, PyTorch, scikit-learn)
Pipelines                            Visual orchestration of ML pipelines
Notebooks                            JupyterLab interactive development environments
Katib                                Hyperparameter tuning engine
MetaController                       Controller aggregation layer

📌 Note: this article is based on Kubeflow v1.7.

3.2 Deploying Kubeflow with kfctl

Step 1: Download kfctl

# Fetch kfctl v1.7.0 from GitHub
wget https://github.com/kubeflow/kfctl/releases/download/v1.7.0/kfctl_v1.7.0-0-ga3e4c35_linux.tar.gz
tar -xzf kfctl_v1.7.0-0-ga3e4c35_linux.tar.gz
sudo mv kfctl /usr/local/bin/

Step 2: Create the configuration file (kfctl_gcp_iap.yaml)

# kfctl_gcp_iap.yaml
apiVersion: kfdef.apps.kubeflow.org/v1alpha1
kind: KfDef
metadata:
  name: kubeflow
  namespace: kubeflow
spec:
  applications:
    - name: istio
      namespace: istio-system
      source:
        git:
          repo: https://github.com/kubeflow/manifests.git
          branch: v1.7.0
          path: istio
    - name: kubeflow-pipelines
      namespace: kubeflow
      source:
        git:
          repo: https://github.com/kubeflow/manifests.git
          branch: v1.7.0
          path: pipelines
    - name: kubeflow-profile-controller
      namespace: kubeflow
      source:
        git:
          repo: https://github.com/kubeflow/manifests.git
          branch: v1.7.0
          path: profile-controller
    - name: kubeflow-user-namespace
      namespace: kubeflow
      source:
        git:
          repo: https://github.com/kubeflow/manifests.git
          branch: v1.7.0
          path: user-namespace
    - name: kubeflow-applications
      namespace: kubeflow
      source:
        git:
          repo: https://github.com/kubeflow/manifests.git
          branch: v1.7.0
          path: applications
    - name: kubeflow-istio-resources
      namespace: kubeflow
      source:
        git:
          repo: https://github.com/kubeflow/manifests.git
          branch: v1.7.0
          path: istio-resources
    - name: kubeflow-argo-ui
      namespace: kubeflow
      source:
        git:
          repo: https://github.com/kubeflow/manifests.git
          branch: v1.7.0
          path: argo-ui
    - name: kubeflow-jupyter-web-app
      namespace: kubeflow
      source:
        git:
          repo: https://github.com/kubeflow/manifests.git
          branch: v1.7.0
          path: jupyter-web-app
    - name: kubeflow-katib
      namespace: kubeflow
      source:
        git:
          repo: https://github.com/kubeflow/manifests.git
          branch: v1.7.0
          path: katib
    - name: kubeflow-kfserving
      namespace: kubeflow
      source:
        git:
          repo: https://github.com/kubeflow/manifests.git
          branch: v1.7.0
          path: kfserving
    - name: kubeflow-ml-pipeline
      namespace: kubeflow
      source:
        git:
          repo: https://github.com/kubeflow/manifests.git
          branch: v1.7.0
          path: ml-pipeline
    - name: kubeflow-istio
      namespace: istio-system
      source:
        git:
          repo: https://github.com/kubeflow/manifests.git
          branch: v1.7.0
          path: istio

Step 3: Deploy Kubeflow

# Create the namespace
kubectl create namespace kubeflow

# Run the deployment
kfctl apply -V -f kfctl_gcp_iap.yaml

⏱️ The deployment takes roughly 15-30 minutes; monitor its progress with:

kubectl get pods -n kubeflow -w

Step 4: Configure Ingress and Access

Since Kubeflow uses Istio by default, configure the Ingress Gateway and bind a domain.

# 1. Get the Istio Gateway IP
kubectl get svc -n istio-system istio-ingressgateway

# 2. Install cert-manager for TLS certificates
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/latest/download/cert-manager.yaml

# 3. Create an Issuer (Let's Encrypt in this example)
cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
    - http01:
        ingress:
          class: istio
EOF

Step 5: Create the Ingress Rule

# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubeflow-ingress
  namespace: istio-system
  annotations:
    kubernetes.io/ingress.class: istio
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
    - hosts:
        - kubeflow.example.com
      secretName: kubeflow-tls-secret
  rules:
    - host: kubeflow.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: istio-ingressgateway
                port:
                  number: 80

Once applied, the Kubeflow UI is reachable at https://kubeflow.example.com.

4. Configuring GPU Resources and the Device Plugin

4.1 Install the NVIDIA Device Plugin

# 1. Add the NVIDIA Helm repository
helm repo add nvidia https://nvidia.github.io/gpu-operator
helm repo update

# 2. Install the GPU Operator (deploys the container toolkit and device plugin)
helm install gpu-operator nvidia/gpu-operator \
  --namespace gpu-operator \
  --create-namespace \
  --set driver.enabled=false \
  --set toolkit.enabled=true \
  --set eula=accept

driver.enabled=false tells the operator not to install the driver itself (a manually installed driver is usually more stable).

4.2 Verify GPU Availability

# Check whether the node exposes GPUs
kubectl describe node <node-name> | grep -A 5 "nvidia.com/gpu"

# Example output:
#   nvidia.com/gpu: 1
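To check total GPU capacity across the whole cluster rather than one node, you can sum the `nvidia.com/gpu` allocatable field from `kubectl get nodes -o json`. A small sketch (the sample data below is invented):

```python
import json

def total_gpus(nodes_json: str) -> int:
    """Sum allocatable nvidia.com/gpu across all nodes in the cluster."""
    total = 0
    for node in json.loads(nodes_json)["items"]:
        # CPU-only nodes simply have no nvidia.com/gpu key
        total += int(node["status"].get("allocatable", {}).get("nvidia.com/gpu", "0"))
    return total

sample = json.dumps({"items": [
    {"status": {"allocatable": {"cpu": "16", "nvidia.com/gpu": "1"}}},
    {"status": {"allocatable": {"cpu": "16", "nvidia.com/gpu": "2"}}},
    {"status": {"allocatable": {"cpu": "8"}}},  # CPU-only node
]})
print(total_gpus(sample))  # 3
```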

4.3 Requesting GPU Resources in a Pod

# gpu-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test
spec:
  containers:
    - name: gpu-container
      image: nvidia/cuda:12.0.0-base-ubuntu22.04
      resources:
        limits:
          nvidia.com/gpu: 1
        requests:
          nvidia.com/gpu: 1
      command: ["sh", "-c", "nvidia-smi && sleep 3600"]

Apply and verify:

kubectl apply -f gpu-pod.yaml
kubectl logs gpu-test
# Should print the nvidia-smi GPU report

5. Distributed Training Job Scheduling (TensorFlow & PyTorch)

5.1 Running Training Jobs with Kubeflow Pipelines

Example: training an image classifier with PyTorch

# train.py
import argparse

import torch
import torch.nn as nn
import torchvision.models as models

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--data-path", type=str, default="/data")
    parser.add_argument("--epochs", type=int, default=10)
    args = parser.parse_args()

    model = models.resnet18(pretrained=True)
    model.fc = nn.Linear(512, 10)  # assume 10 classes

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model.to(device)

    # Simulated data: a small synthetic batch that fits in GPU memory
    dataset = torch.randn(64, 3, 224, 224).to(device)
    labels = torch.randint(0, 10, (64,)).to(device)

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters())

    for epoch in range(args.epochs):
        optimizer.zero_grad()  # clear accumulated gradients each epoch
        outputs = model(dataset)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        print(f"Epoch {epoch+1}/{args.epochs}, Loss: {loss.item():.4f}")

if __name__ == "__main__":
    main()
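The script above runs in a single Pod. For genuinely distributed multi-worker training, Kubeflow's training operator provides a PyTorchJob CRD that injects the rendezvous environment (MASTER_ADDR, WORLD_SIZE, etc.) automatically. A minimal sketch, assuming the training operator is installed; the image tag matches the one built in the next step:

```yaml
# pytorchjob.yaml (sketch; requires the Kubeflow training operator)
apiVersion: kubeflow.org/v1
kind: PyTorchJob
metadata:
  name: resnet18-distributed
  namespace: kubeflow
spec:
  pytorchReplicaSpecs:
    Master:
      replicas: 1
      restartPolicy: OnFailure
      template:
        spec:
          containers:
            - name: pytorch   # the operator requires this container name
              image: registry.example.com/pytorch-train:v1
              resources:
                limits:
                  nvidia.com/gpu: 1
    Worker:
      replicas: 2
      restartPolicy: OnFailure
      template:
        spec:
          containers:
            - name: pytorch
              image: registry.example.com/pytorch-train:v1
              resources:
                limits:
                  nvidia.com/gpu: 1
```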

Build and push the Docker image

# Dockerfile
FROM pytorch/pytorch:2.1.0-cuda11.8-cudnn8-devel

WORKDIR /app
COPY train.py .

CMD ["python", "train.py"]

Then build and push:

docker build -t registry.example.com/pytorch-train:v1 .
docker push registry.example.com/pytorch-train:v1

Write the Pipeline YAML (Kubeflow)

# pipeline.yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: pytorch-training-
spec:
  entrypoint: pytorch-training
  templates:
    - name: pytorch-training
      container:
        image: registry.example.com/pytorch-train:v1
        args:
          - --data-path=/data
          - --epochs=5
        resources:
          limits:
            nvidia.com/gpu: 1
          requests:
            nvidia.com/gpu: 1
        volumeMounts:
          - name: data-volume
            mountPath: /data
      volumes:
        - name: data-volume
          persistentVolumeClaim:
            claimName: training-data-pvc

✅ This template launches a training job with one GPU attached.
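When you need many variants of this job (different epoch counts, GPU counts), generating the manifest programmatically is less error-prone than copy-pasting YAML. A sketch using only the standard library; the field values mirror the template above:

```python
import json

def training_workflow(image: str, epochs: int, gpus: int = 1) -> dict:
    """Build an Argo Workflow manifest for a single-step GPU training job."""
    gpu = {"nvidia.com/gpu": gpus}
    return {
        "apiVersion": "argoproj.io/v1alpha1",
        "kind": "Workflow",
        "metadata": {"generateName": "pytorch-training-"},
        "spec": {
            "entrypoint": "pytorch-training",
            "templates": [{
                "name": "pytorch-training",
                "container": {
                    "image": image,
                    "args": ["--data-path=/data", f"--epochs={epochs}"],
                    "resources": {"limits": gpu, "requests": dict(gpu)},
                },
            }],
        },
    }

wf = training_workflow("registry.example.com/pytorch-train:v1", epochs=5)
# JSON is valid input for `kubectl create -f -`
print(json.dumps(wf, indent=2))
```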

6. Model Inference Service Deployment (KFServing)

6.1 Prepare the Model File

Assume you have already trained a .pt model file and saved it at /model/model.pt.

# model_server.py
from flask import Flask, request, jsonify
import torch
import torchvision.transforms as transforms
from PIL import Image

app = Flask(__name__)

# Load the trained model (map to CPU so the server also runs on non-GPU nodes)
model = torch.load("/model/model.pt", map_location="cpu")
model.eval()

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

@app.route('/predict', methods=['POST'])
def predict():
    file = request.files['image']
    img = Image.open(file.stream).convert('RGB')
    img_tensor = transform(img).unsqueeze(0)

    with torch.no_grad():
        output = model(img_tensor)
        _, predicted = torch.max(output, 1)

    return jsonify({"prediction": int(predicted[0])})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080)

6.2 Build the Inference Image

# Dockerfile.kfserving
FROM python:3.9-slim

WORKDIR /app
COPY model_server.py .
COPY model.pt /model/

RUN pip install flask torch torchvision

EXPOSE 8080
CMD ["python", "model_server.py"]

Then build and push the image:

docker build -t registry.example.com/kfserving-model:v1 -f Dockerfile.kfserving .
docker push registry.example.com/kfserving-model:v1

6.3 Deploy the Inference Service with KFServing

Since we built a custom serving image, use a custom-container predictor (a framework predictor with storageUri and a custom container are mutually exclusive):

# kfserving-deployment.yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: pytorch-image-classifier
  namespace: kubeflow
spec:
  predictor:
    minReplicas: 1
    maxReplicas: 10
    containers:
      - name: kserve-container
        image: registry.example.com/kfserving-model:v1
        ports:
          - containerPort: 8080
        resources:
          limits:
            nvidia.com/gpu: 1
          requests:
            nvidia.com/gpu: 1

Apply it:

kubectl apply -f kfserving-deployment.yaml

✅ Check the service status with kubectl get isvc.

Call the prediction endpoint (the path matches the Flask route defined above):

curl -X POST \
  -F "image=@test.jpg" \
  http://pytorch-image-classifier.kubeflow.example.com/predict
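The same request can be issued from Python with no extra dependencies; a sketch that builds the multipart body by hand (the host name reuses the hypothetical domain used throughout this article):

```python
import io
import urllib.request
import uuid

def multipart_body(field: str, filename: str, payload: bytes):
    """Encode one file as multipart/form-data, matching Flask's request.files[field]."""
    boundary = uuid.uuid4().hex
    buf = io.BytesIO()
    buf.write(f"--{boundary}\r\n".encode())
    buf.write(f'Content-Disposition: form-data; name="{field}"; '
              f'filename="{filename}"\r\n'.encode())
    buf.write(b"Content-Type: application/octet-stream\r\n\r\n")
    buf.write(payload)
    buf.write(f"\r\n--{boundary}--\r\n".encode())
    return buf.getvalue(), f"multipart/form-data; boundary={boundary}"

body, content_type = multipart_body("image", "test.jpg", b"<jpeg bytes here>")
req = urllib.request.Request(
    "http://pytorch-image-classifier.kubeflow.example.com/predict",
    data=body,
    headers={"Content-Type": content_type},
    method="POST",
)
# resp = urllib.request.urlopen(req)   # uncomment against a live endpoint
# print(resp.read())                   # JSON such as {"prediction": ...}
```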

7. Autoscaling (HPA + KEDA)

7.1 Load-Based Horizontal Pod Autoscaling (HPA)

# hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: kfserving-hpa
  namespace: kubeflow
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: pytorch-image-classifier-predictor
  minReplicas: 1
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
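It helps to know how the HPA turns these targets into a replica count: for each metric it proposes `ceil(current_replicas * current / target)`, takes the largest proposal, and clamps it to the min/max bounds. A sketch of that calculation (the sample utilization numbers are invented):

```python
import math

def hpa_desired_replicas(current_replicas, metrics, min_replicas=1, max_replicas=20):
    """metrics: (current_utilization, target_utilization) pairs; HPA takes the max proposal."""
    proposals = [
        math.ceil(current_replicas * current / target)
        for current, target in metrics
    ]
    return max(min_replicas, min(max_replicas, max(proposals)))

# 4 replicas, CPU well above its 70% target, memory right on its 80% target:
print(hpa_desired_replicas(4, [(98.0, 70.0), (80.0, 80.0)]))  # 6
```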

7.2 Event-Driven Scaling (KEDA)

# keda-trigger.yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: kfserving-scaledobject
  namespace: kubeflow
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: pytorch-image-classifier-predictor
  minReplicaCount: 1
  maxReplicaCount: 20
  triggers:
    - type: cpu
      metricType: Utilization
      metadata:
        value: "70"
    - type: memory
      metricType: Utilization
      metadata:
        value: "80"

✅ KEDA supports many more trigger sources (e.g. Kafka, SQS, Prometheus).

8. Security and Access Control (RBAC + OIDC)

8.1 Configure OIDC Authentication (Using Keycloak)

# oidc-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: oidc-config
  namespace: kubeflow
data:
  client-id: kubeflow-client
  issuer-url: https://keycloak.example.com/auth/realms/kubeflow
  redirect-uri: https://kubeflow.example.com/login/callback

8.2 Create RBAC Roles

# rbac-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: kubeflow
  name: model-trainer
rules:
  - apiGroups: [""]
    resources: ["pods", "services"]
    verbs: ["create", "get", "list", "delete"]
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["create", "get", "list"]
  - apiGroups: ["serving.kserve.io"]
    resources: ["inferenceservices"]
    verbs: ["create", "get", "list"]

✅ Combine this with Kubeflow user namespaces (Profiles) for multi-tenant isolation.
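RBAC evaluation is purely additive: a request is allowed if any rule matches its API group, resource, and verb, and there are no deny rules. A toy evaluator (it ignores real RBAC features such as `*` wildcards and resourceNames) makes the semantics concrete; the `rules` literal mirrors the model-trainer Role above:

```python
def allowed(rules, api_group, resource, verb):
    """Return True if any RBAC-style rule grants (api_group, resource, verb)."""
    return any(
        api_group in rule["apiGroups"]
        and resource in rule["resources"]
        and verb in rule["verbs"]
        for rule in rules
    )

# Mirrors the model-trainer Role ("" is the core API group)
rules = [
    {"apiGroups": [""], "resources": ["pods", "services"],
     "verbs": ["create", "get", "list", "delete"]},
    {"apiGroups": ["batch"], "resources": ["jobs"],
     "verbs": ["create", "get", "list"]},
    {"apiGroups": ["serving.kserve.io"], "resources": ["inferenceservices"],
     "verbs": ["create", "get", "list"]},
]

print(allowed(rules, "", "pods", "delete"))          # True
print(allowed(rules, "batch", "jobs", "delete"))     # False: no delete on jobs
print(allowed(rules, "apps", "deployments", "get"))  # False: never granted
```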

9. Summary and Best Practices

Category        Best practice
Performance     Use the GPU Operator plus DCGM to monitor GPU health
Security        Enable mTLS, RBAC, and audit logging
Observability   Integrate Prometheus + Grafana + Jaeger
CI/CD           Manage the Kubeflow deployment with Argo CD
Cost control    Use Pod Disruption Budgets and node taints (e.g. to steer jobs onto cheaper nodes)

10. Future Directions

  • Deeper integration between MLflow and Kubeflow
  • Multi-cloud / hybrid-cloud deployments
  • A model registry (Model Registry)
  • Integration with AutoML toolchains

Closing Remarks

This article has walked through building a Kubernetes-native AI platform from scratch, covering the complete loop from infrastructure setup to model inference deployment. With Kubeflow, KFServing, Katib, and related components working together, an organization can achieve:

  • Fast-iterating machine learning workflows
  • Efficient resource utilization (GPUs in particular)
  • An observable, auditable, and scalable production-grade platform

This architecture has been deployed in finance, healthcare, and manufacturing settings and is both practical and forward-looking.

🚀 Take action: clone this project template and deploy your first cloud-native AI platform!

Tags: Kubernetes, AI platform, cloud native, Kubeflow, machine learning
