Deep Dive into Database Connection Pool Exception Handling: HikariCP and Druid Monitoring and Alerting Best Practices

Will424 2026-01-17T17:02:23+08:00

Introduction

In modern enterprise application development, the database connection pool is the core component for connection management, responsible for connection reuse, resource control, and performance optimization. However, because of complex network environments, unstable database servers, and concurrency pressure from the application itself, connection pool failures occur regularly and can seriously affect system stability and business continuity.

This article analyzes the common failure scenarios of database connection pools and examines the exception handling mechanisms of HikariCP and Druid in detail, covering key techniques such as leak detection, timeout handling, and reconnection after disconnects. It also provides a complete monitoring and alerting scheme and a troubleshooting guide to help developers build highly available database connection management.

Connection Pool Core Concepts and Exception Types

How a Connection Pool Works

A database connection pool is a technique for reusing database connections: a number of connection objects are created up front and kept in memory. When the application needs database access, it borrows a connection from the pool and returns it when finished instead of closing it. This eliminates the overhead of repeatedly creating and tearing down connections and improves system performance.
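The borrow/return cycle described above can be sketched with a toy pool built on a bounded blocking queue. This is illustrative only: `Conn` is a hypothetical stand-in for a real JDBC connection, and production pools layer validation, eviction, and leak tracking on top of this queue discipline.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Toy connection pool: pre-creates "connections" and hands them out on demand.
class SimplePool {
    static class Conn {
        final int id;
        Conn(int id) { this.id = id; }
    }

    private final BlockingQueue<Conn> idle;

    SimplePool(int size) {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            idle.offer(new Conn(i)); // create connections up front
        }
    }

    // Borrow a connection, waiting up to timeoutMs; null models a pool-exhausted timeout.
    Conn borrow(long timeoutMs) {
        try {
            return idle.poll(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }

    // Return the connection to the pool instead of closing it.
    void release(Conn c) {
        idle.offer(c);
    }

    int idleCount() {
        return idle.size();
    }
}
```

A pool of size 2 hands out two connections, times out on a third request, and recovers as soon as one connection is released — exactly the behavior the real pools below manage at scale.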

Common Exception Types

A connection pool can run into the following classes of problems in production:

  1. Connection leaks: the application borrows a connection but never closes it, so it is never returned to the pool
  2. Connection timeouts: acquiring a connection or executing SQL exceeds the configured timeout
  3. Disconnects: connections become invalid after a database server restart or a network interruption
  4. Pool exhaustion: every connection in the pool is in use and new requests cannot obtain one
  5. Deadlocks: deadlock inside the pool or at the application level
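The first failure mode, the connection leak, is almost always a missing close on an exception path. A minimal illustration with a counting stand-in resource (not a real JDBC connection) shows why try-with-resources is the standard defense:

```java
// Illustrates why try-with-resources prevents leaks:
// the resource is closed even when the work in between throws.
class LeakDemo {
    static class TrackedResource implements AutoCloseable {
        static int openCount = 0;          // how many resources are still open
        TrackedResource() { openCount++; }
        @Override public void close() { openCount--; }
    }

    // Leaky pattern: if work.run() throws, close() is never reached.
    static void leaky(Runnable work) {
        TrackedResource r = new TrackedResource();
        work.run();   // an exception here skips the close below
        r.close();
    }

    // Safe pattern: close() runs no matter how the block exits.
    static void safe(Runnable work) {
        try (TrackedResource r = new TrackedResource()) {
            work.run();
        }
    }
}
```

When the work throws, the leaky variant leaves its resource open while the safe variant still closes it — the same difference that separates a leaking DAO from a correct one.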

HikariCP Exception Handling in Detail

Basic Configuration and Exception Detection

HikariCP is one of the most popular high-performance connection pools, and its exception handling is well designed. Let's start with a basic configuration:

@Configuration
public class HikariPoolConfig { // renamed so it does not shadow com.zaxxer.hikari.HikariConfig
    
    @Bean
    public DataSource dataSource() {
        HikariConfig config = new HikariConfig();
        
        // Basic connection settings
        config.setJdbcUrl("jdbc:mysql://localhost:3306/testdb");
        config.setUsername("username");
        config.setPassword("password");
        config.setDriverClassName("com.mysql.cj.jdbc.Driver");
        
        // Pool sizing and lifetimes
        config.setMaximumPoolSize(20);
        config.setMinimumIdle(5);
        config.setConnectionTimeout(30000); // 30 seconds
        config.setIdleTimeout(600000);      // 10 minutes
        config.setMaxLifetime(1800000);     // 30 minutes
        
        // Exception detection
        config.setLeakDetectionThreshold(60000); // flag connections held longer than 60 seconds
        config.setValidationTimeout(5000);       // 5-second validation timeout
        // Note: with a JDBC4-compliant driver HikariCP uses Connection.isValid(),
        // and a test query should normally NOT be set; shown for legacy drivers only
        config.setConnectionTestQuery("SELECT 1");
        
        return new HikariDataSource(config);
    }
}
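In a Spring Boot application the same settings are usually expressed declaratively rather than in a @Bean method. A sketch in application.properties, assuming the standard spring.datasource.hikari.* binding (values mirror the Java configuration above):

```properties
spring.datasource.url=jdbc:mysql://localhost:3306/testdb
spring.datasource.username=username
spring.datasource.password=password
spring.datasource.hikari.maximum-pool-size=20
spring.datasource.hikari.minimum-idle=5
spring.datasource.hikari.connection-timeout=30000
spring.datasource.hikari.idle-timeout=600000
spring.datasource.hikari.max-lifetime=1800000
spring.datasource.hikari.leak-detection-threshold=60000
```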

Connection Leak Detection

Connection leak detection is one of HikariCP's core features. When a connection has been out of the pool (borrowed but not returned) for longer than leakDetectionThreshold, the pool logs a warning including the stack trace of the borrow site:

// Enable leak detection in the pool configuration
config.setLeakDetectionThreshold(30000); // 30 seconds

// Leak detection can also be set when the pool is created
public class ConnectionLeakDetector {
    public void setupLeakDetection() {
        HikariConfig config = new HikariConfig();
        config.setLeakDetectionThreshold(30000);
        
        HikariDataSource dataSource = new HikariDataSource(config);
        
        // Adjust the threshold on a running pool; the setter lives on
        // HikariConfigMXBean, not on HikariPoolMXBean
        dataSource.getHikariConfigMXBean().setLeakDetectionThreshold(30000);
    }
}

Timeout Handling

HikariCP provides several layers of timeout control:

@Configuration
public class TimeoutConfig {
    
    @Bean
    public DataSource dataSource() {
        HikariConfig config = new HikariConfig();
        
        // Maximum time to wait for a connection from the pool (ms)
        config.setConnectionTimeout(30000);
        
        // Maximum time a connection validation may take (ms)
        config.setValidationTimeout(5000);
        
        // Maximum lifetime of a connection in the pool (ms)
        config.setMaxLifetime(1800000);
        
        // Time after which an idle connection is retired (ms)
        config.setIdleTimeout(600000);
        
        return new HikariDataSource(config);
    }
}

Connection Validation and Reconnection

HikariCP copes with dropped connections by validating each connection before handing it out:

@Component
public class ConnectionValidator {
    
    private static final Logger log = LoggerFactory.getLogger(ConnectionValidator.class);
    
    @Autowired
    private DataSource dataSource;
    
    public boolean validateConnection() {
        // try-with-resources guarantees the statement and result set are closed too
        try (Connection connection = dataSource.getConnection();
             PreparedStatement ps = connection.prepareStatement("SELECT 1");
             ResultSet rs = ps.executeQuery()) {
            return rs.next();
        } catch (SQLException e) {
            log.error("Connection validation failed", e);
            return false;
        }
    }
    
    public void handleConnectionFailure() {
        // Shut down the broken pool
        HikariDataSource hikariDS = (HikariDataSource) dataSource;
        hikariDS.close();
        
        // Build a replacement pool; note that callers holding the old
        // DataSource reference must be switched over to the new instance
        HikariConfig config = new HikariConfig();
        // reload configuration...
        HikariDataSource newDataSource = new HikariDataSource(config);
    }
}

Druid Exception Handling in Detail

Druid Basic Configuration and Monitoring Features

Druid is a database connection pool open-sourced by Alibaba, known for its powerful monitoring and extensibility:

@Configuration
public class DruidConfig {
    
    @Bean
    @Primary
    public DataSource dataSource() throws SQLException { // setFilters below can throw SQLException
        DruidDataSource dataSource = new DruidDataSource();
        
        // Basic connection settings
        dataSource.setUrl("jdbc:mysql://localhost:3306/testdb");
        dataSource.setUsername("username");
        dataSource.setPassword("password");
        dataSource.setDriverClassName("com.mysql.cj.jdbc.Driver");
        
        // Pool sizing
        dataSource.setInitialSize(5);
        dataSource.setMinIdle(5);
        dataSource.setMaxActive(20);
        
        // Connection validation
        dataSource.setValidationQuery("SELECT 1");
        dataSource.setTestWhileIdle(true);
        dataSource.setTestOnBorrow(false);
        dataSource.setTestOnReturn(false);
        
        // Abandoned-connection (leak) detection
        dataSource.setRemoveAbandoned(true);
        dataSource.setRemoveAbandonedTimeout(60); // reclaim after 60 seconds
        dataSource.setLogAbandoned(true);
        
        // Monitoring filters; use "slf4j" instead of "log4j" when logging via SLF4J/Logback
        dataSource.setFilters("stat,wall,log4j");
        
        return dataSource;
    }
}
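As with HikariCP, this configuration can also live in application.properties. A sketch assuming the druid-spring-boot-starter and its spring.datasource.druid.* binding (property names follow the starter's relaxed binding of DruidDataSource setters):

```properties
spring.datasource.url=jdbc:mysql://localhost:3306/testdb
spring.datasource.username=username
spring.datasource.password=password
spring.datasource.druid.initial-size=5
spring.datasource.druid.min-idle=5
spring.datasource.druid.max-active=20
spring.datasource.druid.validation-query=SELECT 1
spring.datasource.druid.test-while-idle=true
spring.datasource.druid.remove-abandoned=true
spring.datasource.druid.remove-abandoned-timeout=60
spring.datasource.druid.filters=stat,wall
```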

Druid Connection Leak Detection

Druid offers more detailed leak detection through its removeAbandoned mechanism:

@Component
public class DruidLeakDetector {
    
    @Autowired
    private DataSource dataSource;
    
    public void configureLeakDetection() {
        DruidDataSource druidDS = (DruidDataSource) dataSource;
        
        // Enable abandoned-connection detection
        druidDS.setRemoveAbandoned(true);
        druidDS.setRemoveAbandonedTimeout(30);  // reclaim after 30 seconds
        druidDS.setLogAbandoned(true);          // log the borrow-site stack trace
        
        // Reset the global monitoring statistics
        DruidStatManagerFacade statManager = DruidStatManagerFacade.getInstance();
        statManager.resetAll();
    }
    
    public void printConnectionStatus() {
        DruidDataSource druidDS = (DruidDataSource) dataSource;
        System.out.println("Active connections: " + druidDS.getActiveCount());
        System.out.println("Idle connections: " + druidDS.getIdleCount());
        // getCreateCount() is cumulative (connections ever created), not current pool size
        System.out.println("Connections created: " + druidDS.getCreateCount());
    }
}

Druid Monitoring and Alerting Configuration

Druid ships with rich monitoring that can be integrated into various monitoring systems:

@Configuration
public class DruidMonitorConfig {
    
    @Bean
    public ServletRegistrationBean<StatViewServlet> statViewServlet() {
        StatViewServlet servlet = new StatViewServlet();
        ServletRegistrationBean<StatViewServlet> bean = new ServletRegistrationBean<>(servlet);
        
        bean.setUrlMappings("/druid/*");
        bean.setInitParameters(new HashMap<String, String>() {{
            put("loginUsername", "admin");
            put("loginPassword", "password");
            put("resetEnable", "false");
            put("allow", ""); // an empty value allows all IPs; restrict this in production
        }});
        
        return bean;
    }
    
    @Bean
    public FilterRegistrationBean<WebStatFilter> webStatFilter() {
        WebStatFilter filter = new WebStatFilter();
        FilterRegistrationBean<WebStatFilter> bean = new FilterRegistrationBean<>(filter);
        
        bean.setUrlPatterns("/*");
        bean.setInitParameters(new HashMap<String, String>() {{
            put("exclusions", "*.js,*.gif,*.jpg,*.png,*.css,*.ico,/druid/*");
        }});
        
        return bean;
    }
}

Exception Handling Best Practices

Guarding Against Connection Leaks

@Component
public class ConnectionLeakProtection {
    
    private static final Logger logger = LoggerFactory.getLogger(ConnectionLeakProtection.class);
    
    @Autowired
    private DataSource dataSource;
    
    public void safeExecuteWithConnection(String sql, ConnectionCallback callback) {
        Connection connection = null;
        try {
            connection = getConnection();
            callback.doInConnection(connection);
        } catch (SQLException e) {
            logger.error("Database operation failed", e);
            throw new RuntimeException(e);
        } finally {
            // Make sure the connection is always returned to the pool
            closeConnection(connection);
        }
    }
    
    private Connection getConnection() throws SQLException {
        return dataSource.getConnection();
    }
    
    private void closeConnection(Connection connection) {
        if (connection != null) {
            try {
                connection.close();
            } catch (SQLException e) {
                logger.warn("Failed to close connection", e);
            }
        }
    }
}

@FunctionalInterface
interface ConnectionCallback {
    void doInConnection(Connection connection) throws SQLException;
}

Timeout Handling and Retries

@Component
public class TimeoutRetryHandler {
    
    private static final Logger logger = LoggerFactory.getLogger(TimeoutRetryHandler.class);
    
    public <T> T executeWithRetry(Supplier<T> operation, int maxRetries, long retryDelay) {
        Exception lastException = null;
        
        for (int i = 0; i <= maxRetries; i++) {
            try {
                return operation.get();
            } catch (Exception e) {
                lastException = e;
                
                if (i >= maxRetries) {
                    throw new RuntimeException("Operation failed after " + maxRetries + " retries", e);
                }
                
                logger.warn("Operation failed, retrying in {}ms: {}", retryDelay, e.getMessage());
                try {
                    Thread.sleep(retryDelay);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw new RuntimeException("Interrupted during retry", ie);
                }
            }
        }
        
        throw new RuntimeException("Unexpected execution path", lastException);
    }
    
    public void executeWithTimeout(String sql, int timeoutSeconds) {
        // Run the statement on a worker thread so we can bound its duration
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            Future<?> future = executor.submit(() -> executeSql(sql));
            future.get(timeoutSeconds, TimeUnit.SECONDS);
        } catch (TimeoutException e) {
            logger.error("Database operation timed out after {} seconds", timeoutSeconds);
            throw new RuntimeException("Operation timeout", e);
        } catch (Exception e) {
            logger.error("Database operation failed", e);
            throw new RuntimeException("Operation failed", e);
        } finally {
            // shut the worker down on every path, not only on success
            executor.shutdownNow();
        }
    }
}
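The retry helper above waits a fixed delay between attempts. Under sustained database pressure, an exponential backoff variant avoids hammering a struggling server; the sketch below is a standalone illustration, not part of either pool's API:

```java
import java.util.function.Supplier;

// Retry with exponential backoff: the wait doubles after every failed attempt.
class BackoffRetry {
    static <T> T executeWithBackoff(Supplier<T> op, int maxRetries, long baseDelayMs) {
        long delay = baseDelayMs;
        RuntimeException last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return op.get();
            } catch (RuntimeException e) {
                last = e;
                if (attempt == maxRetries) {
                    break; // out of attempts
                }
                try {
                    Thread.sleep(delay);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw new RuntimeException("Interrupted during retry", ie);
                }
                delay *= 2; // back off: base, 2x, 4x, ...
            }
        }
        throw new RuntimeException("Failed after " + (maxRetries + 1) + " attempts", last);
    }
}
```

In practice the base delay and cap should be tuned so the total retry window stays below the caller's own timeout, otherwise retries just move the failure upstream.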

Reconnection and Failover

@Component
public class ConnectionRecoveryHandler {
    
    private static final Logger logger = LoggerFactory.getLogger(ConnectionRecoveryHandler.class);
    
    @Autowired
    private DataSource dataSource;
    
    public boolean attemptReconnect() {
        try {
            // First check whether the pool can still hand out a valid connection
            if (isConnectionValid()) {
                return true;
            }
            
            // Otherwise try to reinitialize the pool
            logger.info("Attempting connection recovery...");
            recoverDataSource();
            return isConnectionValid();
        } catch (Exception e) {
            logger.error("Failed to recover connection", e);
            return false;
        }
    }
    
    private boolean isConnectionValid() {
        try (Connection connection = dataSource.getConnection()) {
            return connection != null && !connection.isClosed();
        } catch (SQLException e) {
            logger.warn("Connection validation failed", e);
            return false;
        }
    }
    
    private void recoverDataSource() throws SQLException {
        HikariDataSource hikariDS = (HikariDataSource) dataSource;
        
        // Shut down the broken pool
        hikariDS.close();
        
        // Back off briefly before rebuilding
        try {
            Thread.sleep(5000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        
        // Rebuild the pool; assigning to the local variable alone is not enough --
        // the new instance must replace the DataSource reference shared with callers
        HikariConfig config = new HikariConfig();
        // reload configuration...
        hikariDS = new HikariDataSource(config);
    }
}

Designing the Monitoring and Alerting System

Collecting Basic Monitoring Metrics

@Component
public class ConnectionPoolMonitor {
    
    private static final Logger logger = LoggerFactory.getLogger(ConnectionPoolMonitor.class);
    
    @Autowired
    private DataSource dataSource;
    
    @Autowired
    private AlertRuleManager alertRuleManager;
    
    // Collect the key pool metrics
    public Map<String, Object> collectMetrics() {
        Map<String, Object> metrics = new HashMap<>();
        
        try {
            if (dataSource instanceof HikariDataSource) {
                HikariDataSource hikariDS = (HikariDataSource) dataSource;
                HikariPoolMXBean poolBean = hikariDS.getHikariPoolMXBean();
                
                metrics.put("activeConnections", poolBean.getActiveConnections());
                metrics.put("idleConnections", poolBean.getIdleConnections());
                metrics.put("totalConnections", poolBean.getTotalConnections());
                metrics.put("waitingThreads", poolBean.getThreadsAwaitingConnection());
                // the threshold getter lives on the config, not on the pool MXBean
                metrics.put("leakDetectionThreshold", hikariDS.getLeakDetectionThreshold());
                
            } else if (dataSource instanceof DruidDataSource) {
                DruidDataSource druidDS = (DruidDataSource) dataSource;
                
                metrics.put("activeConnections", druidDS.getActiveCount());
                metrics.put("idleConnections", druidDS.getIdleCount());
                // cumulative count of connections ever created, not current pool size
                metrics.put("createdConnections", druidDS.getCreateCount());
                metrics.put("maxActive", druidDS.getMaxActive());
                metrics.put("removeAbandoned", druidDS.isRemoveAbandoned());
            }
            
        } catch (Exception e) {
            logger.error("Failed to collect metrics", e);
        }
        
        return metrics;
    }
    
    // Scheduled metric collection
    @Scheduled(fixedRate = 30000) // every 30 seconds
    public void monitorPoolStatus() {
        Map<String, Object> metrics = collectMetrics();
        
        logger.info("Connection Pool Metrics: {}", metrics);
        
        // Evaluate the alert rules against the fresh metrics
        alertRuleManager.checkAlertConditions(metrics);
    }
}

Alert Rules and Thresholds

@Component
public class AlertRuleManager {
    
    private static final Logger logger = LoggerFactory.getLogger(AlertRuleManager.class);
    
    // Alert thresholds, overridable via configuration properties
    @Value("${pool.alert.active.threshold:15}")
    private int activeThreshold;
    
    @Value("${pool.alert.idle.threshold:2}")
    private int idleThreshold;
    
    @Value("${pool.alert.waiting.threshold:5}")
    private int waitingThreshold;
    
    @Value("${pool.alert.leak.threshold:30000}")
    private long leakThreshold;
    
    public void checkAlertConditions(Map<String, Object> metrics) {
        try {
            Integer activeConnections = (Integer) metrics.get("activeConnections");
            Integer idleConnections = (Integer) metrics.get("idleConnections");
            Integer waitingThreads = (Integer) metrics.get("waitingThreads");
            
            if (activeConnections != null && activeConnections > activeThreshold) {
                triggerAlert("High active connections", 
                    String.format("Active connections: %d, threshold: %d", 
                        activeConnections, activeThreshold));
            }
            
            if (idleConnections != null && idleConnections < idleThreshold) {
                triggerAlert("Low idle connections", 
                    String.format("Idle connections: %d, threshold: %d", 
                        idleConnections, idleThreshold));
            }
            
            if (waitingThreads != null && waitingThreads > waitingThreshold) {
                triggerAlert("High waiting threads", 
                    String.format("Waiting threads: %d, threshold: %d", 
                        waitingThreads, waitingThreshold));
            }
            
        } catch (Exception e) {
            logger.error("Failed to check alert conditions", e);
        }
    }
    
    private void triggerAlert(String title, String message) {
        logger.warn("ALERT TRIGGERED - {}: {}", title, message);
        
        // Integrate email, SMS, WeChat, or other notification channels here
        sendNotification(title, message);
    }
    
    private void sendNotification(String title, String message) {
        // Implement the actual notification delivery here
        System.out.println("Sending alert: " + title + " - " + message);
    }
}

Monitoring Dashboard Endpoints

@RestController
@RequestMapping("/monitor")
public class MonitorController {
    
    @Autowired
    private ConnectionPoolMonitor monitor;
    
    @Autowired
    private AlertRuleManager alertManager;
    
    @GetMapping("/metrics")
    public ResponseEntity<Map<String, Object>> getMetrics() {
        Map<String, Object> metrics = monitor.collectMetrics();
        return ResponseEntity.ok(metrics);
    }
    
    @GetMapping("/health")
    public ResponseEntity<Map<String, Object>> getHealthStatus() {
        Map<String, Object> health = new HashMap<>();
        
        try {
            Map<String, Object> metrics = monitor.collectMetrics();
            boolean healthy = checkHealthStatus(metrics);
            
            health.put("status", healthy ? "healthy" : "unhealthy");
            health.put("metrics", metrics);
            health.put("timestamp", System.currentTimeMillis());
            
        } catch (Exception e) {
            health.put("status", "error");
            health.put("error", e.getMessage());
        }
        
        return ResponseEntity.ok(health);
    }
    
    private boolean checkHealthStatus(Map<String, Object> metrics) {
        // Basic health heuristics over the collected metrics
        Integer activeConnections = (Integer) metrics.get("activeConnections");
        Integer idleConnections = (Integer) metrics.get("idleConnections");
        
        if (activeConnections == null || idleConnections == null) {
            return false;
        }
        
        // Healthy means: not too many active connections and some idle headroom
        return activeConnections <= 15 && idleConnections >= 2;
    }
}

Troubleshooting and Diagnostics Guide

A Connection Leak Diagnosis Tool

@Component
public class LeakDiagnosisTool {
    
    private static final Logger logger = LoggerFactory.getLogger(LeakDiagnosisTool.class);
    
    @Autowired
    private DataSource dataSource;
    
    // Diagnose suspected connection leaks
    public void diagnoseConnectionLeaks() {
        try {
            if (dataSource instanceof HikariDataSource) {
                diagnoseHikariLeak();
            } else if (dataSource instanceof DruidDataSource) {
                diagnoseDruidLeak();
            }
        } catch (Exception e) {
            logger.error("Failed to diagnose connection leaks", e);
        }
    }
    
    private void diagnoseHikariLeak() {
        HikariDataSource hikariDS = (HikariDataSource) dataSource;
        HikariPoolMXBean poolBean = hikariDS.getHikariPoolMXBean();
        
        logger.info("Hikari Pool Diagnostics:");
        logger.info("Active Connections: {}", poolBean.getActiveConnections());
        logger.info("Idle Connections: {}", poolBean.getIdleConnections());
        logger.info("Total Connections: {}", poolBean.getTotalConnections());
        logger.info("Threads Awaiting Connection: {}", poolBean.getThreadsAwaitingConnection());
        
        // A busy pool is normal; the typical leak signature is the pool pinned
        // at its maximum while callers queue for connections
        if (poolBean.getActiveConnections() >= hikariDS.getMaximumPoolSize()
                && poolBean.getThreadsAwaitingConnection() > 0) {
            logger.warn("Potential connection leak detected!");
        }
    }
    
    private void diagnoseDruidLeak() {
        DruidDataSource druidDS = (DruidDataSource) dataSource;
        
        logger.info("Druid Pool Diagnostics:");
        logger.info("Active Count: {}", druidDS.getActiveCount());
        logger.info("Idle Count: {}", druidDS.getIdleCount());
        logger.info("Create Count: {}", druidDS.getCreateCount());
        logger.info("Remove Abandoned: {}", druidDS.isRemoveAbandoned());
        
        if (druidDS.getActiveCount() >= druidDS.getMaxActive()) {
            logger.warn("Potential connection leak detected in Druid!");
            // dump per-connection details where available
            logConnectionDetails(druidDS);
        }
    }
    
    private void logConnectionDetails(DruidDataSource druidDS) {
        try {
            // Peek at Druid's internal connection holders via reflection
            // (implementation-dependent; may break across Druid versions)
            Field field = DruidDataSource.class.getDeclaredField("connections");
            field.setAccessible(true);
            
            Object connections = field.get(druidDS);
            logger.info("Connection details: {}", connections);
            
        } catch (Exception e) {
            logger.warn("Failed to get connection details", e);
        }
    }
}

Performance Bottleneck Analysis

@Component
public class PerformanceAnalyzer {
    
    private static final Logger logger = LoggerFactory.getLogger(PerformanceAnalyzer.class);
    
    @Autowired
    private DataSource dataSource;
    
    // Analyze connection pool performance bottlenecks
    public void analyzePerformanceBottlenecks() {
        try {
            Map<String, Object> metrics = collectPerformanceMetrics();
            
            // Contention: threads queueing for a connection
            analyzeWaitTime(metrics);
            
            // Utilization: how heavily the pool is loaded
            analyzeConnectionUtilization(metrics);
            
            // Error rate
            analyzeErrorRate(metrics);
            
        } catch (Exception e) {
            logger.error("Failed to analyze performance", e);
        }
    }
    
    private Map<String, Object> collectPerformanceMetrics() {
        Map<String, Object> metrics = new HashMap<>();
        
        try {
            // Collect performance-related indicators
            if (dataSource instanceof HikariDataSource) {
                HikariPoolMXBean poolBean = ((HikariDataSource) dataSource).getHikariPoolMXBean();
                // number of threads currently waiting for a connection (a count, not a duration)
                metrics.put("waitingThreads", poolBean.getThreadsAwaitingConnection());
                metrics.put("activeCount", poolBean.getActiveConnections());
                metrics.put("idleCount", poolBean.getIdleConnections());
            }
            
        } catch (Exception e) {
            logger.error("Failed to collect performance metrics", e);
        }
        
        return metrics;
    }
    
    private void analyzeWaitTime(Map<String, Object> metrics) {
        Integer waitingThreads = (Integer) metrics.get("waitingThreads");
        if (waitingThreads != null && waitingThreads > 10) {
            logger.warn("High contention detected: {} threads waiting for a connection", waitingThreads);
        }
    }
    
    private void analyzeConnectionUtilization(Map<String, Object> metrics) {
        Integer active = (Integer) metrics.get("activeCount");
        Integer idle = (Integer) metrics.get("idleCount");
        
        if (active != null && idle != null) {
            int total = active + idle;
            double utilization = (double) active / total * 100;
            
            logger.info("Connection Utilization: {}%", Math.round(utilization));
            
            if (utilization > 90) {
                logger.warn("High connection utilization detected: {}%", Math.round(utilization));
            }
        }
    }
    
    private void analyzeErrorRate(Map<String, Object> metrics) {
        // Error-rate analysis would go here (e.g. failed borrows vs. total borrows)
        logger.info("Performance analysis completed");
    }
}

Advanced Optimization Strategies

Dynamically Adjusting Pool Parameters

@Component
public class DynamicPoolConfig {
    
    private static final Logger logger = LoggerFactory.getLogger(DynamicPoolConfig.class);
    
    @Autowired
    private DataSource dataSource;
    
    // Resize the pool according to the observed load
    public void adjustPoolSizeBasedOnLoad() {
        try {
            if (!(dataSource instanceof HikariDataSource)) {
                return;
            }
            HikariPoolMXBean poolBean = ((HikariDataSource) dataSource).getHikariPoolMXBean();
            
            int currentActive = poolBean.getActiveConnections();
            int currentIdle = poolBean.getIdleConnections();
            int total = currentActive + currentIdle;
            
            if (total > 0) {
                double utilization = (double) currentActive / total;
                
                // Resize based on utilization
                adjustPoolSize(utilization);
            }
            
        } catch (Exception e) {
            logger.error("Failed to adjust pool size", e);
        }
    }
    
    private void adjustPoolSize(double utilization) {
        if (dataSource instanceof HikariDataSource) {
            HikariDataSource hikariDS = (HikariDataSource) dataSource;
            // getHikariConfigMXBean() returns the MXBean view, not a HikariConfig
            HikariConfigMXBean config = hikariDS.getHikariConfigMXBean();
            
            int currentMax = config.getMaximumPoolSize();
            int newMax = currentMax;
            
            if (utilization > 0.8) {
                // Heavily utilized: grow the pool, capped at 100
                newMax = Math.min(currentMax + 5, 100);
                logger.info("Increasing pool size to: {}", newMax);
            } else if (utilization < 0.3 && currentMax > 10) {
                // Lightly utilized: shrink the pool, floored at 10
                newMax = Math.max(currentMax - 5, 10);
                logger.info("Decreasing pool size to: {}", newMax);
            }
            
            if (newMax != currentMax) {
                config.setMaximumPoolSize(newMax);
            }
        }
    }
}

A Pool Health Check Mechanism

@Component
public class HealthCheckService {
    
    private static final Logger logger = LoggerFactory.getLogger(HealthCheckService.class);
    
    @Autowired
    private DataSource dataSource;
    
    // Run a health check periodically
    @Scheduled(fixedRate = 60000) // every minute
    public void performHealthCheck() {
        try {
            boolean isHealthy = checkDataSourceHealth();
            
            if (!isHealthy) {
                logger.error("DataSource health check failed");
                // Try to recover
                recoverDataSource();
            } else {
                logger.info("DataSource is healthy");
            }
            
        } catch (Exception e) {
            logger.error("Health check failed", e);
        }
    }
    
    private boolean checkDataSourceHealth() {
        // try-with-resources closes the statement and result set as well
        try (Connection connection = dataSource.getConnection();
             PreparedStatement ps = connection.prepareStatement("SELECT 1");
             ResultSet rs = ps.executeQuery()) {
            return rs.next();
        } catch (SQLException e) {
            logger.error("DataSource health check failed", e);
            return false;
        }
    }
    
    private void recoverDataSource() {
        try {
            if (dataSource instanceof HikariDataSource) {
                HikariDataSource hikariDS = (HikariDataSource) dataSource;
                hikariDS.close();
                
                // Reinitialize the pool
                HikariConfig config = new HikariConfig();
                // reload configuration...
                HikariDataSource newDataSource = new HikariDataSource(config);
                
                // Swap in the new data source (application-specific; must be thread-safe)
                replaceDataSource(newDataSource);
            }
        } catch (Exception e) {
            logger.error("Failed to recover data source", e);
        }
    }
}