In high-performance service architecture design, caching is a key building block. The conventional approach is to keep hot data in a remote cache such as Redis or Memcached and only query the database on a cache miss, which speeds up access and reduces database load.
As architectures evolved, a remote cache alone turned out to be insufficient in some scenarios. Combining it with a local cache (such as Guava Cache or Caffeine) yields a two-level cache architecture: local cache (level 1) plus remote cache (level 2), which further improves response times and overall service performance. The basic access flow is shown below (ignoring concurrency and other complications for now):
(Figure: basic two-level cache access flow, see ./assets/2.jpeg)
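In code, that flow boils down to the following minimal sketch (class, field, and method names here are illustrative, not from the original): check the local cache, fall back to Redis, fall back to the database, and back-fill each level on the way out.

import java.util.concurrent.TimeUnit;
import com.github.benmanes.caffeine.cache.Cache;
import org.springframework.data.redis.core.RedisTemplate;

// Illustrative only: caffeineCache, redisTemplate and orderMapper are assumed to be injected elsewhere
public class TwoLevelLookupSketch {

    private Cache<String, Object> caffeineCache;          // level 1: local memory
    private RedisTemplate<String, Object> redisTemplate;  // level 2: Redis
    private OrderMapper orderMapper;                      // database access (hypothetical mapper)

    public Order load(Long id) {
        String key = "order:" + id;

        Object local = caffeineCache.getIfPresent(key);
        if (local != null) {
            return (Order) local;                          // local hit: no network call at all
        }

        Object remote = redisTemplate.opsForValue().get(key);
        if (remote != null) {
            caffeineCache.put(key, remote);                // back-fill level 1
            return (Order) remote;
        }

        Order fromDb = orderMapper.selectById(id);         // miss on both levels: query the database
        if (fromDb != null) {
            redisTemplate.opsForValue().set(key, fromDb, 120, TimeUnit.SECONDS);
            caffeineCache.put(key, fromDb);
        }
        return fromDb;
    }
}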
Compared with a remote cache alone, a two-level cache has clear advantages: local-cache hits are served from the application's own memory, avoiding the network round trip entirely and responding faster, while also taking load off the remote cache. It also raises a number of design issues, the core one being data consistency between the two cache levels and the database. Cache expiration times, eviction policies, and concurrent access also need attention; this article focuses first on the code-level implementation of a two-level cache.
The implementation integrates Caffeine (arguably the strongest local cache available) as the level-1 cache and Redis (excellent performance) as the level-2 cache. Set up a Spring Boot project and add the following dependencies:
<dependency>
    <groupId>com.github.ben-manes.caffeine</groupId>
    <artifactId>caffeine</artifactId>
    <version>2.9.2</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-cache</artifactId>
</dependency>
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-pool2</artifactId>
    <version>2.8.1</version>
</dependency>
Configure a RedisTemplate, taking care of the connection factory, serialization, and so on:
/**
 * Redis cache configuration
 * @author ZhuZiKai
 * @date 2022/3/31
 */
@Configuration
@EnableCaching
@EnableAspectJAutoProxy
public class RedisConfig extends CachingConfigurerSupport {

    @Bean
    public FastJsonRedisSerializer<Object> fastJson2JsonRedisSerializer() {
        return new FastJsonRedisSerializer<>(Object.class);
    }

    @Bean
    public StringRedisSerializer stringRedisSerializer() {
        return new StringRedisSerializer();
    }

    @Bean("redis")
    @Primary
    public RedisTemplate<String, Object> initRedisTemplate(RedisConnectionFactory redisConnectionFactory,
                                                           StringRedisSerializer stringRedisSerializer,
                                                           FastJsonRedisSerializer<Object> fastJson2JsonRedisSerializer) {
        RedisTemplate<String, Object> redisTemplate = new RedisTemplate<>();
        redisTemplate.setConnectionFactory(redisConnectionFactory);
        // String keys; FastJson for values so that non-String objects (e.g. Order) can be stored
        redisTemplate.setKeySerializer(stringRedisSerializer);
        redisTemplate.setValueSerializer(fastJson2JsonRedisSerializer);
        redisTemplate.setHashKeySerializer(stringRedisSerializer);
        redisTemplate.setHashValueSerializer(fastJson2JsonRedisSerializer);
        redisTemplate.setDefaultSerializer(stringRedisSerializer);
        redisTemplate.afterPropertiesSet();
        return redisTemplate;
    }
}
Spring provides the CacheManager interface and the cache annotations (@Cacheable / @CachePut / @CacheEvict) to simplify cache operations.
@Cacheable: looks up the cache by key and returns the cached value directly on a hit; on a miss it executes the method and puts the result into the cache.
@CachePut: always executes the method and forces the cache to be updated with its result.
@CacheEvict: removes the corresponding cache entry after the method executes.
Configure a CacheManager backed by Caffeine so the annotations use the local cache:
@Configuration
public class CacheManagerConfig {

    @Bean
    public CacheManager cacheManager() {
        CaffeineCacheManager cacheManager = new CaffeineCacheManager();
        cacheManager.setCaffeine(Caffeine.newBuilder()
                .initialCapacity(128)
                .maximumSize(1024)
                .expireAfterWrite(60, TimeUnit.SECONDS));
        return cacheManager;
    }
}
Add @EnableCaching to the startup class to enable caching support.
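For reference, a minimal startup class might look like this (the class name is assumed):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cache.annotation.EnableCaching;

@SpringBootApplication
@EnableCaching   // turn on Spring's cache abstraction (@Cacheable / @CachePut / @CacheEvict)
public class DoubleCacheApplication {
    public static void main(String[] args) {
        SpringApplication.run(DoubleCacheApplication.class, args);
    }
}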
For the query method, use @Cacheable and hand the Caffeine cache over to Spring, while the business logic and the Redis operations stay in the method body:

@Cacheable(value = "order", key = "#id")
public Order getOrderById(Long id) {
    String key = CacheConstant.ORDER + id;
    // Level 2: check Redis
    Object obj = redisTemplate.opsForValue().get(key);
    if (Objects.nonNull(obj)) {
        log.info("get data from redis");
        return (Order) obj;
    }
    // Miss in Redis: query the database
    log.info("get data from database");
    Order myOrder = orderMapper.selectOne(new LambdaQueryWrapper<Order>()
            .eq(Order::getId, id));
    // Put the result into Redis with a 120-second TTL
    redisTemplate.opsForValue().set(key, myOrder, 120, TimeUnit.SECONDS);
    return myOrder;
}
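The CacheConstant class, the Order entity, and the mapper referenced above are not shown in the original; the following are minimal hypothetical sketches (assuming MyBatis-Plus, with made-up constant values and fields):

import com.baomidou.mybatisplus.annotation.TableName;
import com.baomidou.mybatisplus.core.mapper.BaseMapper;
import lombok.Data;

public class CacheConstant {
    public static final String ORDER = "order:"; // Redis key prefix for orders (assumed value)
    public static final String COLON = ":";      // separator used by the aspect later on
}

@Data
@TableName("t_order")
public class Order {
    private Long id;
    private String orderNumber;
}

public interface OrderMapper extends BaseMapper<Order> {
}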
For the update method, use @CachePut and drop the manual Caffeine update (Spring handles it); Redis is still updated by hand:

@CachePut(cacheNames = "order", key = "#order.id")
public Order updateOrder(Order order) {
    log.info("update order data");
    orderMapper.updateById(order);
    // Refresh Redis with the new value
    redisTemplate.opsForValue().set(CacheConstant.ORDER + order.getId(),
            order, 120, TimeUnit.SECONDS);
    return order;
}
Note: the @CachePut method must declare a return value; otherwise an empty object may be cached and break subsequent queries.
For the delete method, use @CacheEvict and only handle the Redis deletion manually:

@CacheEvict(cacheNames = "order", key = "#id")
public void deleteOrder(Long id) {
    log.info("delete order");
    orderMapper.deleteById(id);
    redisTemplate.delete(CacheConstant.ORDER + id);
}
With this approach the local cache is managed by Spring while Redis is operated manually, which already reduces the intrusion into business code.
To decouple things further, a custom annotation plus an aspect can manage both cache levels with no intrusion into the business code at all.
Define a @DoubleCache annotation that supports reading, writing, and deleting cache entries, with Spring EL support for the key:
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
@Documented
public @interface DoubleCache {
    String cacheName();
    String key();                            // supports Spring EL
    long l2TimeOut() default 120;            // level-2 (Redis) TTL in seconds
    CacheType type() default CacheType.FULL;
}

// Cache operation types
public enum CacheType {
    FULL,   // read and write
    PUT,    // write only
    DELETE  // delete
}
Parse the key expression from the annotation against the method parameters:
public class ElParser {

    public static String parse(String elString, TreeMap<String, Object> map) {
        elString = String.format("#{%s}", elString);
        ExpressionParser parser = new SpelExpressionParser();
        EvaluationContext context = new StandardEvaluationContext();
        // Expose each method parameter as a Spring EL variable
        map.forEach(context::setVariable);
        Expression expression = parser.parseExpression(elString, new TemplateParserContext());
        return expression.getValue(context, String.class);
    }
}
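As a quick illustration (the values here are made up), the parser resolves a key expression against the method arguments like this:

TreeMap<String, Object> params = new TreeMap<>();
params.put("id", 100L);

// "#id" is wrapped into the template "#{#id}" and evaluated against the variables above
String keyPart = ElParser.parse("#id", params);   // "100"
// the aspect then prefixes it with the cache name, producing e.g. "order:100"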
Register the Caffeine cache as a bean so the aspect can use it directly:

@Configuration
public class CaffeineConfig {

    @Bean
    public Cache<String, Object> caffeineCache() {
        return Caffeine.newBuilder()
                .initialCapacity(128)                    // initial capacity
                .maximumSize(1024)                       // maximum number of entries
                .expireAfterWrite(60, TimeUnit.SECONDS)  // level-1 TTL
                .build();
    }
}
An aspect intercepts methods annotated with @DoubleCache and handles cache reads, writes, and deletes in one place:
@Slf4j
@Component
@Aspect
@AllArgsConstructor
public class CacheAspect {

    private final Cache<String, Object> cache;
    private final RedisTemplate<String, Object> redisTemplate;

    @Pointcut("@annotation(com.cn.dc.annotation.DoubleCache)")
    public void cacheAspect() {}

    @Around("cacheAspect()")
    public Object doAround(ProceedingJoinPoint point) throws Throwable {
        MethodSignature signature = (MethodSignature) point.getSignature();
        Method method = signature.getMethod();

        // Collect the method parameters into a map for the Spring EL context
        String[] paramNames = signature.getParameterNames();
        Object[] args = point.getArgs();
        TreeMap<String, Object> treeMap = new TreeMap<>();
        for (int i = 0; i < paramNames.length; i++) {
            treeMap.put(paramNames[i], args[i]);
        }

        DoubleCache annotation = method.getAnnotation(DoubleCache.class);
        String elResult = ElParser.parse(annotation.key(), treeMap);
        String realKey = annotation.cacheName() + CacheConstant.COLON + elResult;

        // Dispatch on the operation type
        if (annotation.type() == CacheType.PUT) {
            Object object = point.proceed();
            redisTemplate.opsForValue().set(realKey, object, annotation.l2TimeOut(), TimeUnit.SECONDS);
            cache.put(realKey, object);
            return object;
        } else if (annotation.type() == CacheType.DELETE) {
            redisTemplate.delete(realKey);
            cache.invalidate(realKey);
            return point.proceed();
        }

        // FULL (read/write) flow: check Caffeine first
        Object caffeineCache = cache.getIfPresent(realKey);
        if (Objects.nonNull(caffeineCache)) {
            log.info("get data from caffeine");
            return caffeineCache;
        }

        // Then check Redis, back-filling Caffeine on a hit
        Object redisCache = redisTemplate.opsForValue().get(realKey);
        if (Objects.nonNull(redisCache)) {
            log.info("get data from redis");
            cache.put(realKey, redisCache);
            return redisCache;
        }

        // Miss on both levels: execute the method (hit the database) and populate both caches
        log.info("get data from database");
        Object object = point.proceed();
        if (Objects.nonNull(object)) {
            redisTemplate.opsForValue().set(realKey, object, annotation.l2TimeOut(), TimeUnit.SECONDS);
            cache.put(realKey, object);
        }
        return object;
    }
}
Business code now only needs the custom annotation and can focus on business logic:
@DoubleCache(cacheName = "order", key = "#id",type = CacheType.FULL)
public Order getOrderById(Long id) {
return orderMapper.selectOne(new LambdaQueryWrapper().eq(Order::getId, id));
}
@DoubleCache(cacheName = "order",key = "#order.id",type = CacheType.PUT)
public Order updateOrder(Order order) {
orderMapper.updateById(order);
return order;
}
@DoubleCache(cacheName = "order",key = "#id",type = CacheType.DELETE)
public void deleteOrder(Long id) {
orderMapper.deleteById(id);
}
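As a final sanity check, a hypothetical caller (the controller and service names here are assumed, not from the original) can exercise the flow: the first request logs "get data from database", and a repeat request within the TTL is answered from Caffeine.

import lombok.RequiredArgsConstructor;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequiredArgsConstructor
public class OrderController {

    private final OrderService orderService; // service containing the @DoubleCache methods

    // First call populates both caches; subsequent calls within the TTL hit Caffeine first
    @GetMapping("/order/{id}")
    public Order getOrder(@PathVariable Long id) {
        return orderService.getOrderById(id);
    }
}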
This article has walked through two ways to build a two-level cache, ordered by how much they intrude on business code: Spring's cache annotations combined with manual Redis operations, and a fully decoupled custom annotation with an aspect.
In real projects, whether to use a two-level cache depends on the business. If Redis alone meets the requirements, there is no need to force it in, because real-world use brings complications such as concurrency, transaction rollback, and choosing suitable cache policies. Weigh the characteristics of your data (which of it belongs in the level-1 versus level-2 cache) and the cost of keeping it consistent before settling on an architecture, so that the cache truly becomes a booster for service performance rather than a burden.