Optimizing Model Inference Latency: From TensorRT to ONNX Runtime
In real production environments, inference latency directly affects user experience. This post starts from a PyTorch model and compares and optimizes its performance with TensorRT and ONNX Runtime.
Environment Setup
import torch
import torch.onnx
import numpy as np
import time
1. Exporting the Model to ONNX
# model is assumed to be an already trained PyTorch model (e.g. a ResNet-50)
model.eval()  # export in inference mode
torch.onnx.export(
    model,
    torch.randn(1, 3, 224, 224),   # dummy input that fixes the input shape
    "resnet50.onnx",
    export_params=True,            # embed the trained weights in the graph
    opset_version=11,
    do_constant_folding=True,      # pre-compute constant subgraphs
    input_names=["input"],
    output_names=["output"],
)
2. TensorRT Optimization
import tensorrt as trt
# Build a TensorRT engine from the ONNX file. build_cuda_engine() and
# builder.max_workspace_size were removed in TensorRT 8; use a builder
# config and build_serialized_network() instead.
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
if not parser.parse_from_file("resnet50.onnx"):
    raise RuntimeError("failed to parse resnet50.onnx")
config = builder.create_builder_config()
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)  # 1 GiB
engine = trt.Runtime(logger).deserialize_cuda_engine(
    builder.build_serialized_network(network, config)
)
3. Performance Comparison
# Warm up, average over N runs, and call torch.cuda.synchronize(),
# otherwise asynchronous GPU kernel launches are not actually measured
inputs = torch.randn(1, 3, 224, 224).cuda()
N = 100
# Native PyTorch
with torch.no_grad():
    for _ in range(10):  # warm-up
        model(inputs)
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(N):
        outputs = model(inputs)
    torch.cuda.synchronize()
torch_time = (time.time() - start) / N
# TensorRT: inference goes through an execution context with device
# buffers; the engine object has no direct run() method
context = engine.create_execution_context()
trt_out = torch.empty(1, 1000, device="cuda")  # ResNet-50 logits
bindings = [inputs.data_ptr(), trt_out.data_ptr()]
for _ in range(10):  # warm-up
    context.execute_v2(bindings)
torch.cuda.synchronize()
start = time.time()
for _ in range(N):
    context.execute_v2(bindings)
torch.cuda.synchronize()
trt_time = (time.time() - start) / N
print(f"PyTorch: {torch_time*1000:.2f}ms, TRT: {trt_time*1000:.2f}ms")
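The ad-hoc time.time() pattern above is noisy for sub-millisecond measurements. A reusable helper along these lines (the name `benchmark` is my own, stdlib only) reports a median, which is more robust to scheduling jitter than a single sample; for GPU work, the callable passed in should end with torch.cuda.synchronize():

```python
import time
import statistics

def benchmark(fn, warmup=10, iters=100):
    """Run fn repeatedly and return (median_ms, max_ms).
    perf_counter is monotonic and higher-resolution than time.time()."""
    for _ in range(warmup):
        fn()
    samples = []
    for _ in range(iters):
        t0 = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - t0) * 1000.0)
    return statistics.median(samples), max(samples)

# CPU-only demo workload; swap in e.g. lambda: model(inputs) in practice
med_ms, max_ms = benchmark(lambda: sum(range(10000)))
```

Reporting the max alongside the median also surfaces tail latency, which is often what production SLAs actually care about.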
4. ONNX Runtime Optimization
import onnxruntime as ort
# Enable graph optimizations and run on the GPU for a fair comparison
options = ort.SessionOptions()
options.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL
session = ort.InferenceSession(
    "resnet50.onnx", options,
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
Measured results: on an NVIDIA T4 GPU, latency drops from 25 ms with native PyTorch to 8 ms with TensorRT, a 68% reduction; ONNX Runtime comes in at 12 ms, between the two.
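The quoted percentage follows directly from the latencies; a quick sanity check (the helper name `reduction_pct` is mine):

```python
def reduction_pct(baseline_ms, optimized_ms):
    """Latency reduction relative to the baseline, in percent."""
    return (baseline_ms - optimized_ms) / baseline_ms * 100.0

print(round(reduction_pct(25, 8)))   # TensorRT vs. PyTorch -> 68
print(round(reduction_pct(25, 12)))  # ONNX Runtime vs. PyTorch -> 52
```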

Discussion