In enterprise application development, well-structured logging is a key safeguard for stable operation, troubleshooting, and performance tuning.
Spring Boot, as a popular Java framework, provides powerful and flexible logging support, yet establishing a unified, efficient logging standard remains a challenge for many teams.
This article walks through five logging standardization strategies in Spring Boot.
A unified log format is the foundation of team collaboration: it makes logs both easier to read and easier to analyze.
Spring Boot lets developers customize the log output format, including the timestamp, log level, thread information, logger name, and message content.
Define the log pattern in application.properties or application.yml:
# application.properties
# Console log pattern
logging.pattern.console=%clr(%d{yyyy-MM-dd HH:mm:ss.SSS}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr(${PID:- }){magenta} %clr(---){faint} %clr([%15.15t]){faint} %clr(%-40.40logger{39}){cyan} %clr(:){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}
# File log pattern
logging.pattern.file=%d{yyyy-MM-dd HH:mm:ss.SSS} ${LOG_LEVEL_PATTERN:-%5p} ${PID:- } --- [%t] %-40.40logger{39} : %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}
The equivalent YAML configuration:
logging:
  pattern:
    console: "%clr(%d{yyyy-MM-dd HH:mm:ss.SSS}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr(${PID:- }){magenta} %clr(---){faint} %clr([%15.15t]){faint} %clr(%-40.40logger{39}){cyan} %clr(:){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}"
    file: "%d{yyyy-MM-dd HH:mm:ss.SSS} ${LOG_LEVEL_PATTERN:-%5p} ${PID:- } --- [%t] %-40.40logger{39} : %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}"
For more complex requirements, use logback-spring.xml:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/defaults.xml"/>

    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>${CONSOLE_LOG_PATTERN}</pattern>
            <charset>UTF-8</charset>
        </encoder>
    </appender>

    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>logs/application.log</file>
        <encoder>
            <pattern>${FILE_LOG_PATTERN}</pattern>
            <charset>UTF-8</charset>
        </encoder>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <fileNamePattern>logs/archived/application.%d{yyyy-MM-dd}.%i.log</fileNamePattern>
            <maxFileSize>10MB</maxFileSize>
            <maxHistory>30</maxHistory>
            <totalSizeCap>3GB</totalSizeCap>
        </rollingPolicy>
    </appender>

    <root level="INFO">
        <appender-ref ref="CONSOLE"/>
        <appender-ref ref="FILE"/>
    </root>
</configuration>
For systems that feed a centralized log-analysis pipeline, JSON-formatted logs are much easier to process. Add the encoder dependency:
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>7.2</version>
</dependency>
Then configure a JSON file appender in logback-spring.xml:
<appender name="JSON_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>logs/application.json</file>
    <encoder class="net.logstash.logback.encoder.LogstashEncoder">
        <includeMdcKeyName>requestId</includeMdcKeyName>
        <includeMdcKeyName>userId</includeMdcKeyName>
        <customFields>{"application":"my-service","environment":"${ENVIRONMENT:-development}"}</customFields>
    </encoder>
    <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
        <fileNamePattern>logs/archived/application.%d{yyyy-MM-dd}.%i.json</fileNamePattern>
        <maxFileSize>10MB</maxFileSize>
        <maxHistory>30</maxHistory>
        <totalSizeCap>3GB</totalSizeCap>
    </rollingPolicy>
</appender>
Two further pattern variants: a compact colored pattern for local development, and a file pattern that prints the MDC requestId and userId values:
%d{HH:mm:ss.SSS} %highlight(%-5level) %cyan(%logger{15}) - %msg%n
%d{yyyy-MM-dd HH:mm:ss.SSS} [%X{requestId}] [%X{userId}] %-5level [%thread] %logger{36} - %msg%n
Using log levels consistently helps separate information by importance, which simplifies troubleshooting and system monitoring.
Spring Boot supports the standard log levels: TRACE, DEBUG, INFO, WARN, and ERROR.
# Global log level
logging.level.root=INFO
# Per-package log levels
logging.level.org.springframework.web=DEBUG
logging.level.org.hibernate=ERROR
logging.level.com.mycompany.app=DEBUG
Log levels can also differ per environment, using a multi-document application.yml:
# application.yml
spring:
  profiles:
    active: dev
---
spring:
  config:
    activate:
      on-profile: dev
logging:
  level:
    root: INFO
    com.mycompany.app: DEBUG
    org.springframework: INFO
---
spring:
  config:
    activate:
      on-profile: prod
logging:
  level:
    root: WARN
    com.mycompany.app: INFO
    org.springframework: WARN
Levels can even be changed at runtime through Spring Boot's LoggingSystem, which is handy for temporarily raising verbosity on a live instance:
@RestController
@RequestMapping("/api/logs")
public class LoggingController {

    @Autowired
    private LoggingSystem loggingSystem;

    @PutMapping("/level/{package}/{level}")
    public void changeLogLevel(
            @PathVariable("package") String packageName,
            @PathVariable("level") String level) {
        LogLevel logLevel = LogLevel.valueOf(level.toUpperCase());
        loggingSystem.setLogLevel(packageName, logLevel);
    }
}
For example, PUT /api/logs/level/com.mycompany.app/DEBUG switches that package to DEBUG without a restart.
Clear conventions for when to use each level are essential for team collaboration:
// ERROR: unrecoverable failures that need attention
try {
    // business operation
} catch (Exception e) {
    log.error("Failed to process payment for order: {}", orderId, e);
    throw new PaymentProcessingException("Payment processing failed", e);
}

// WARN: abnormal but tolerated situations
if (retryCount > maxRetries / 2) {
    log.warn("High number of retries detected for operation: {}, current retry: {}/{}",
            operationType, retryCount, maxRetries);
}

// INFO: key business milestones
log.info("Order {} has been successfully processed with {} items",
        order.getId(), order.getItems().size());

// DEBUG: diagnostic detail for development
log.debug("Processing product with ID: {}, name: {}, category: {}",
        product.getId(), product.getName(), product.getCategory());

// TRACE: the most fine-grained execution detail
log.trace("Method execution path: class={}, method={}, params={}",
        className, methodName, Arrays.toString(args));

Always use parameterized messages, and guard a statement only when building its arguments is itself expensive:

// Recommended
if (log.isDebugEnabled()) {
    log.debug("Complex calculation result: {}", calculateComplexResult());
}

// Avoid: the string (and the expensive call) is evaluated even when DEBUG is disabled
log.debug("Complex calculation result: " + calculateComplexResult());
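The same lazy-evaluation principle can be sketched with the JDK's own java.util.logging, which accepts a Supplier so the message is built only when the level is enabled (a minimal stand-alone illustration, not tied to SLF4J; the counter simulates an expensive computation):

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.logging.Level;
import java.util.logging.Logger;

public class LazyLoggingDemo {
    static final AtomicInteger CALLS = new AtomicInteger();

    // Stands in for an expensive computation we only want to run when needed
    static String expensiveResult() {
        CALLS.incrementAndGet();
        return "result";
    }

    public static void main(String[] args) {
        Logger logger = Logger.getLogger("demo");
        logger.setLevel(Level.INFO); // FINE (roughly DEBUG) is disabled

        // Eager: the expensive call runs even though the message is discarded
        logger.log(Level.FINE, "value=" + expensiveResult());

        // Lazy: the Supplier is never invoked because FINE is disabled
        logger.log(Level.FINE, () -> "value=" + expensiveResult());

        System.out.println("expensive calls: " + CALLS.get());
    }
}
```

Only the eager variant pays the cost; the Supplier-based call never touches the counter.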
AOP (aspect-oriented programming) centralizes logging so the same log statements need not be hand-written in every method. It is particularly well suited to API access logs and method execution-time metrics.
@Aspect
@Component
@Slf4j
public class LoggingAspect {

    @Pointcut("execution(* com.mycompany.app.service.*.*(..))")
    public void serviceLayer() {}

    @Around("serviceLayer()")
    public Object logMethodExecution(ProceedingJoinPoint joinPoint) throws Throwable {
        String className = joinPoint.getSignature().getDeclaringTypeName();
        String methodName = joinPoint.getSignature().getName();
        log.info("Executing: {}.{}", className, methodName);
        long startTime = System.currentTimeMillis();
        try {
            Object result = joinPoint.proceed();
            long executionTime = System.currentTimeMillis() - startTime;
            log.info("Executed: {}.{} in {} ms", className, methodName, executionTime);
            return result;
        } catch (Exception e) {
            log.error("Exception in {}.{}: {}", className, methodName, e.getMessage(), e);
            throw e;
        }
    }
}
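The around-advice idea itself can be sketched without Spring, using a plain JDK dynamic proxy that times and logs every call on an interface (illustrative only; the GreetingService interface is hypothetical, and Spring AOP generates comparable proxies for you):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class ProxyLoggingDemo {
    // Hypothetical service interface used for illustration
    interface GreetingService {
        String greet(String name);
    }

    // Wraps any interface implementation with before/after logging, like @Around advice
    @SuppressWarnings("unchecked")
    static <T> T withLogging(Class<T> type, T target) {
        InvocationHandler handler = (proxy, method, args) -> {
            long start = System.nanoTime();
            System.out.println("Executing: " + method.getName());
            try {
                Object result = method.invoke(target, args);
                long micros = (System.nanoTime() - start) / 1_000;
                System.out.println("Executed: " + method.getName() + " in " + micros + " us");
                return result;
            } catch (java.lang.reflect.InvocationTargetException e) {
                System.out.println("Exception in " + method.getName());
                throw e.getCause(); // unwrap so callers see the original exception
            }
        };
        return (T) Proxy.newProxyInstance(type.getClassLoader(), new Class<?>[]{type}, handler);
    }

    public static void main(String[] args) {
        GreetingService service = withLogging(GreetingService.class, name -> "Hello, " + name);
        System.out.println(service.greet("world"));
    }
}
```

The logging concern lives entirely in the wrapper; the target implementation stays untouched, which is exactly what the aspects above achieve declaratively.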
A dedicated aspect for API endpoints can additionally capture request metadata:
@Aspect
@Component
@Slf4j
public class ApiLoggingAspect {

    @Pointcut("@annotation(org.springframework.web.bind.annotation.RequestMapping) || " +
            "@annotation(org.springframework.web.bind.annotation.GetMapping) || " +
            "@annotation(org.springframework.web.bind.annotation.PostMapping) || " +
            "@annotation(org.springframework.web.bind.annotation.PutMapping) || " +
            "@annotation(org.springframework.web.bind.annotation.DeleteMapping)")
    public void apiMethods() {}

    @Around("apiMethods()")
    public Object logApiCall(ProceedingJoinPoint joinPoint) throws Throwable {
        HttpServletRequest request = ((ServletRequestAttributes) RequestContextHolder
                .currentRequestAttributes()).getRequest();
        String requestURI = request.getRequestURI();
        String httpMethod = request.getMethod();
        String clientIP = request.getRemoteAddr();
        log.info("API Request - Method: {} URI: {} Client: {}", httpMethod, requestURI, clientIP);
        long startTime = System.currentTimeMillis();
        try {
            Object result = joinPoint.proceed();
            long duration = System.currentTimeMillis() - startTime;
            log.info("API Response - Method: {} URI: {} Duration: {} ms Status: SUCCESS",
                    httpMethod, requestURI, duration);
            return result;
        } catch (Exception e) {
            long duration = System.currentTimeMillis() - startTime;
            log.error("API Response - Method: {} URI: {} Duration: {} ms Status: ERROR Message: {}",
                    httpMethod, requestURI, duration, e.getMessage(), e);
            throw e;
        }
    }
}
For finer control, define a custom annotation and advise only annotated methods:
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.METHOD})
public @interface LogExecutionTime {
    String description() default "";
}

@Aspect
@Component
@Slf4j
public class CustomLogAspect {

    @Around("@annotation(logExecutionTime)")
    public Object logExecutionTime(ProceedingJoinPoint joinPoint, LogExecutionTime logExecutionTime) throws Throwable {
        String description = logExecutionTime.description();
        String methodName = joinPoint.getSignature().getName();
        log.info("Starting {} - {}", methodName, description);
        long startTime = System.currentTimeMillis();
        try {
            Object result = joinPoint.proceed();
            long executionTime = System.currentTimeMillis() - startTime;
            log.info("Completed {} - {} in {} ms", methodName, description, executionTime);
            return result;
        } catch (Exception e) {
            long executionTime = System.currentTimeMillis() - startTime;
            log.error("Failed {} - {} after {} ms: {}", methodName, description,
                    executionTime, e.getMessage(), e);
            throw e;
        }
    }
}
Usage example:
@Service
public class OrderService {

    @LogExecutionTime(description = "Process order payment")
    public PaymentResult processPayment(Order order) {
        // payment processing logic
    }
}
// Example: masking sensitive data before it reaches the logs
private String maskCardNumber(String cardNumber) {
    if (cardNumber == null || cardNumber.length() < 8) {
        return "***";
    }
    return "******" + cardNumber.substring(cardNumber.length() - 4);
}
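A quick self-contained check of the masking helper above (same logic, extracted into a runnable class so it can be exercised directly; the sample card number is made up):

```java
public class MaskDemo {
    // Keep only the last 4 digits; anything shorter than 8 chars is fully masked
    static String maskCardNumber(String cardNumber) {
        if (cardNumber == null || cardNumber.length() < 8) {
            return "***";
        }
        return "******" + cardNumber.substring(cardNumber.length() - 4);
    }

    public static void main(String[] args) {
        System.out.println(maskCardNumber("6222021234567890")); // ******7890
        System.out.println(maskCardNumber("1234"));             // ***
    }
}
```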
MDC (Mapped Diagnostic Context) stores request-scoped context that the logging framework attaches to every log line, which makes it especially valuable for request tracing in distributed systems.
@Component
@Order(Ordered.HIGHEST_PRECEDENCE)
public class MdcLoggingFilter extends OncePerRequestFilter {

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
                                    FilterChain filterChain) throws ServletException, IOException {
        try {
            // Generate a unique request ID
            String requestId = UUID.randomUUID().toString().replace("-", "");
            MDC.put("requestId", requestId);

            // Attach user information when available
            Authentication authentication = SecurityContextHolder.getContext().getAuthentication();
            if (authentication != null && authentication.isAuthenticated()) {
                MDC.put("userId", authentication.getName());
            }

            // Attach request metadata
            MDC.put("clientIP", request.getRemoteAddr());
            MDC.put("userAgent", request.getHeader("User-Agent"));
            MDC.put("httpMethod", request.getMethod());
            MDC.put("requestURI", request.getRequestURI());

            // Echo the ID in a response header so clients can correlate
            response.setHeader("X-Request-ID", requestId);
            filterChain.doFilter(request, response);
        } finally {
            // Clear the MDC to avoid leaking context between pooled threads
            MDC.clear();
        }
    }
}
Integrate with Spring Cloud Sleuth and Zipkin for end-to-end distributed tracing:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-sleuth-zipkin</artifactId>
</dependency>

spring.application.name=my-service
spring.sleuth.sampler.probability=1.0
spring.zipkin.base-url=http://localhost:9411
Because the MDC is bound to the current thread, background jobs and @Async methods must propagate it by hand:
@Service
public class BackgroundJobService {

    private static final Logger log = LoggerFactory.getLogger(BackgroundJobService.class);

    @Async
    public CompletableFuture<Void> processJob(String jobId, Map<String, String> context) {
        // Save the worker thread's existing MDC context
        Map<String, String> previousContext = MDC.getCopyOfContextMap();
        try {
            // Install the job-specific context
            MDC.put("jobId", jobId);
            if (context != null) {
                context.forEach(MDC::put);
            }
            log.info("Starting background job processing");
            // business logic
            // ...
            log.info("Completed background job processing");
            return CompletableFuture.completedFuture(null);
        } finally {
            // Restore the previous context, or clear it
            if (previousContext != null) {
                MDC.setContextMap(previousContext);
            } else {
                MDC.clear();
            }
        }
    }
}
// Custom thread-pool configuration that propagates the MDC context to async tasks
@Configuration
public class AsyncConfig implements AsyncConfigurer {

    @Override
    public Executor getAsyncExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(5);
        executor.setMaxPoolSize(10);
        executor.setQueueCapacity(25);
        executor.setThreadNamePrefix("MyAsync-");
        // Decorate each task: capture the submitter's MDC and install it in the worker thread
        executor.setTaskDecorator(runnable -> {
            Map<String, String> contextMap = MDC.getCopyOfContextMap();
            return () -> {
                try {
                    if (contextMap != null) {
                        MDC.setContextMap(contextMap);
                    }
                    runnable.run();
                } finally {
                    MDC.clear();
                }
            };
        });
        executor.initialize();
        return executor;
    }
}
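The decorator trick can be sketched without Spring or SLF4J, using a ThreadLocal map as a stand-in for the MDC (illustrative only; the real TaskDecorator does the same thing with MDC's own copy/set/clear methods):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicReference;

public class ContextPropagationDemo {
    // Stand-in for MDC: a per-thread context map
    static final ThreadLocal<Map<String, String>> CONTEXT = ThreadLocal.withInitial(HashMap::new);

    // Wraps a task so the submitting thread's context is visible inside the worker
    static Runnable decorated(Runnable task) {
        Map<String, String> snapshot = new HashMap<>(CONTEXT.get()); // captured at submit time
        return () -> {
            CONTEXT.set(snapshot);
            try {
                task.run();
            } finally {
                CONTEXT.remove(); // avoid leaking context across pooled threads
            }
        };
    }

    public static String observedInWorker() throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            CONTEXT.get().put("requestId", "abc123");
            AtomicReference<String> seen = new AtomicReference<>();
            pool.submit(decorated(() -> seen.set(CONTEXT.get().get("requestId")))).get();
            return seen.get();
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("worker saw requestId=" + observedInWorker());
    }
}
```

Without the decorator the worker thread would see an empty context; with it, the snapshot taken at submission time travels with the task.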
In high-throughput systems, synchronous logging can become a performance bottleneck, especially when I/O is constrained.
Asynchronous logging moves the write off the calling thread and can improve throughput significantly.
With Logback, wrap an existing appender in an AsyncAppender:
<appender name="ASYNC_FILE" class="ch.qos.logback.classic.AsyncAppender">
    <queueSize>512</queueSize>
    <discardingThreshold>0</discardingThreshold>
    <includeCallerData>false</includeCallerData>
    <neverBlock>false</neverBlock>
    <appender-ref ref="FILE"/>
</appender>
To use Log4j2's async loggers instead, add the dependencies (and remember to exclude the default spring-boot-starter-logging):
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-log4j2</artifactId>
</dependency>
<dependency>
    <groupId>com.lmax</groupId>
    <artifactId>disruptor</artifactId>
    <version>3.4.4</version>
</dependency>
Configure Log4j2 for all-async logging, for example in log4j2.component.properties:
log4j2.contextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector
log4j2.asyncLoggerRingBufferSize=1024
For more advanced Log4j2 tuning, declare async loggers explicitly in log4j2-spring.xml:
<Configuration>
    <Appenders>
        <Console name="Console" target="SYSTEM_OUT">
            <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"/>
        </Console>
    </Appenders>
    <Loggers>
        <AsyncLogger name="com.mycompany.app" level="INFO" additivity="false">
            <AppenderRef ref="Console"/>
        </AsyncLogger>
        <Root level="INFO">
            <AppenderRef ref="Console"/>
        </Root>
    </Loggers>
</Configuration>
For special needs, a custom asynchronous logger can be built on a dedicated single-threaded executor:
@Component
public class AsyncLogger {

    private static final Logger log = LoggerFactory.getLogger(AsyncLogger.class);
    private final ExecutorService logExecutor;

    public AsyncLogger() {
        this.logExecutor = Executors.newSingleThreadExecutor(r -> {
            Thread thread = new Thread(r, "async-logger");
            thread.setDaemon(true);
            return thread;
        });
        // Drain any queued log tasks when the application shuts down
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            logExecutor.shutdown();
            try {
                if (!logExecutor.awaitTermination(5, TimeUnit.SECONDS)) {
                    log.warn("AsyncLogger executor did not terminate in the expected time.");
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }));
    }

    public void info(String format, Object... arguments) {
        logExecutor.submit(() -> log.info(format, arguments));
    }

    public void warn(String format, Object... arguments) {
        logExecutor.submit(() -> log.warn(format, arguments));
    }

    // SLF4J already treats a trailing Throwable argument specially,
    // so no extra unwrapping is needed here
    public void error(String format, Object... arguments) {
        logExecutor.submit(() -> log.error(format, arguments));
    }
}
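The hand-rolled logger above can be exercised with a stdlib-only sketch that swaps the SLF4J logger for an in-memory sink, confirming that callers return immediately, ordering is preserved, and everything is drained on shutdown (AsyncSinkDemo and its sink are illustrative stand-ins):

```java
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class AsyncSinkDemo {
    private final Queue<String> sink = new ConcurrentLinkedQueue<>(); // stands in for the real appender
    private final ExecutorService executor = Executors.newSingleThreadExecutor(r -> {
        Thread t = new Thread(r, "async-logger");
        t.setDaemon(true);
        return t;
    });

    public void info(String message) {
        executor.submit(() -> sink.add(message)); // the caller returns immediately
    }

    // Single worker thread guarantees submission order; awaitTermination drains the queue
    public List<String> shutdownAndCollect() throws InterruptedException {
        executor.shutdown();
        if (!executor.awaitTermination(5, TimeUnit.SECONDS)) {
            throw new IllegalStateException("logger did not drain in time");
        }
        return List.copyOf(sink);
    }

    public static void main(String[] args) throws InterruptedException {
        AsyncSinkDemo logger = new AsyncSinkDemo();
        logger.info("first");
        logger.info("second");
        System.out.println(logger.shutdownAndCollect()); // [first, second]
    }
}
```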
Be aware of asynchronous logging's trade-offs: buffered entries can be lost if the process crashes before the queue drains, memory usage grows with the queue size, and capturing caller data (class, method, line) is costly and usually disabled.
Use synchronous and asynchronous logging where each fits:
// Log critical business events synchronously
log.info("Transaction completed: id={}, amount={}, status={}",
        transaction.getId(), transaction.getAmount(), transaction.getStatus());

// Log high-frequency statistics asynchronously
asyncLogger.info("API usage stats: endpoint={}, count={}, avgResponseTime={}ms",
        endpoint, requestCount, avgResponseTime);
For demanding workloads, Log4j2's async mode is recommended; its throughput is far higher than Logback's.
These strategies are not mutually exclusive; combined, they form a complete logging system.
In practice, choose the mix that fits your project's scale, your team, and your business requirements.
Good logging practice not only helps developers locate and fix problems faster, it also provides the evidence base for performance tuning and security audits.