WL
Java Full Stack Developer
Wassim Lagnaoui

Lesson 17: Spring Boot Caching

Master Spring Boot caching strategies to build high-performance applications that scale efficiently through intelligent data storage and retrieval optimization.

Introduction

In modern web applications, performance is paramount, and caching is one of the most effective techniques for achieving fast response times and scalability. Spring Boot's caching abstraction provides a powerful, flexible framework for implementing caching strategies that can transform your application's performance characteristics.

Think of caching as your application's short-term memory - it remembers frequently accessed information so the application doesn't have to perform expensive operations repeatedly. Just as you might keep your most-used tools within arm's reach on your workbench, caching keeps frequently requested data in fast-access storage, dramatically reducing response times and server load.

This lesson explores Spring Boot's comprehensive caching capabilities, from simple in-memory caching to sophisticated distributed cache strategies that can handle enterprise-scale applications. You'll learn to implement cache annotations, configure different cache providers, design effective cache strategies, and optimize cache performance for production environments.


Understanding Caching

Definition

  • Caching is a performance optimization technique that stores copies of frequently accessed data in fast-access storage to avoid repeated expensive operations.
  • When data is requested, the cache is checked first; if found (cache hit), the cached data is returned immediately.
  • If not found (cache miss), the expensive operation is performed, and the result is stored in the cache for future requests.
  • Effective caching can reduce database load, improve response times, and enhance overall application scalability by orders of magnitude.

Analogy

Think of caching like the strategy a busy chef uses in a restaurant kitchen. Instead of going to the main storage room every time they need common ingredients like salt, pepper, or olive oil, the chef keeps frequently used items on the counter within arm's reach. When an order comes in for a dish that needs salt, the chef grabs it from the counter (cache hit) rather than walking to the storage room (database query). This saves time and effort for every dish prepared. Occasionally, an ingredient runs out and the chef must restock from the main storage (cache miss), but this happens much less frequently than if they fetched every ingredient from storage for every dish. The result is faster service and less wear on the chef (server).

Examples

Without caching - repeated expensive operations:

@Service
public class ProductService {
    public Product getProduct(Long id) {
        // Expensive database query every time
        return productRepository.findById(id).orElse(null);
    }

    public List<Product> getPopularProducts() {
        // Complex aggregation query every time
        return productRepository.findTop10ByOrderBySalesDesc();
    }
}

With caching - store results for fast retrieval:

@Service
public class ProductService {
    @Cacheable("products")
    public Product getProduct(Long id) {
        // Query once, cache result for subsequent calls
        return productRepository.findById(id).orElse(null);
    }

    @Cacheable("popular-products")
    public List<Product> getPopularProducts() {
        // Expensive result cached until evicted (any TTL comes from the cache manager, not this annotation)
        return productRepository.findTop10ByOrderBySalesDesc();
    }
}

Performance impact comparison:

// Without cache:
// - Database query: 200ms per request
// - 1000 requests = 200 seconds total

// With cache:
// - First request: 200ms (cache miss)
// - Next 999 requests: 2ms each (cache hits)
// - Total: 200ms + (999 × 2ms) = 2.2 seconds
// 90x performance improvement!
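The hit/miss flow behind these numbers can be sketched in plain Java, without Spring, using a ConcurrentHashMap as the cache (class and method names here are illustrative, not part of any framework):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Plain-Java sketch of the cache-hit / cache-miss flow. The "database" is
// simulated by a counter so we can observe how many expensive loads happen.
public class CacheAsideDemo {
    private final Map<Long, String> cache = new ConcurrentHashMap<>();
    private final AtomicInteger dbLoads = new AtomicInteger();

    public String getProduct(Long id) {
        // Cache hit: return immediately. Cache miss: load, store, then return.
        return cache.computeIfAbsent(id, this::loadFromDatabase);
    }

    private String loadFromDatabase(Long id) {
        dbLoads.incrementAndGet();          // the expensive operation happens here
        return "product-" + id;
    }

    public int databaseLoads() {
        return dbLoads.get();
    }

    public static void main(String[] args) {
        CacheAsideDemo demo = new CacheAsideDemo();
        for (int i = 0; i < 1000; i++) {
            demo.getProduct(42L);           // 1 miss, then 999 hits
        }
        System.out.println("DB loads: " + demo.databaseLoads()); // prints "DB loads: 1"
    }
}
```

Even with 1000 requests, the expensive load runs exactly once - the same effect @Cacheable achieves declaratively.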

Spring Cache Abstraction

Definition

  • Spring's cache abstraction provides a unified programming model for caching that works with different cache providers without changing your code.
  • The abstraction uses annotations to declare caching behavior, automatically handling cache operations like storing, retrieving, and evicting data.
  • This means you can switch between cache providers (from simple HashMap to Redis clusters) by changing configuration, not code, providing flexibility and vendor independence.

Examples

Enable caching in Spring Boot:

@SpringBootApplication
@EnableCaching  // Enables Spring's caching support
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

Basic cache configuration:

@Configuration
@EnableCaching
public class CacheConfig {

    @Bean
    public CacheManager cacheManager() {
        // Simple in-memory cache manager
        SimpleCacheManager manager = new SimpleCacheManager();
        manager.setCaches(Arrays.asList(
            new ConcurrentMapCache("products"),
            new ConcurrentMapCache("users"),
            new ConcurrentMapCache("categories")
        ));
        return manager;
    }
}

Cache manager with multiple caches:

@Bean
public CacheManager cacheManager() {
    CaffeineCacheManager manager = new CaffeineCacheManager();
    manager.setCacheNames("products", "users", "categories", "search-results");
    manager.setCaffeine(Caffeine.newBuilder()
        .maximumSize(1000)
        .expireAfterWrite(10, TimeUnit.MINUTES));
    return manager;
}

Cache Annotations

Definition

  • Spring provides several cache annotations that declaratively define caching behavior without cluttering business logic with cache management code.
  • @Cacheable caches method results, @CacheEvict removes cached data, @CachePut updates cache entries, and @Caching combines multiple cache operations.
  • These annotations use SpEL (Spring Expression Language) for dynamic cache keys and conditions, providing fine-grained control over when and how caching occurs.

Examples

@Cacheable - cache method results:

@Service
public class ProductService {

    @Cacheable(value = "products", key = "#id")
    public Product findProduct(Long id) {
        System.out.println("Fetching product from database: " + id);
        return productRepository.findById(id).orElse(null);
    }

    @Cacheable(value = "products", key = "#name", condition = "#name.length() > 2")
    public Product findByName(String name) {
        return productRepository.findByName(name);
    }
}

@CacheEvict - remove cached data:

@CacheEvict(value = "products", key = "#product.id")
public Product updateProduct(Product product) {
    Product updated = productRepository.save(product);
    System.out.println("Cache evicted for product: " + product.getId());
    return updated;
}

@CacheEvict(value = "products", allEntries = true)
public void clearAllProducts() {
    System.out.println("All product cache entries cleared");
}

@CachePut - update cache:

@CachePut(value = "products", key = "#result.id")
public Product createProduct(Product product) {
    Product saved = productRepository.save(product);
    System.out.println("Product saved and cached: " + saved.getId());
    return saved;
}

@Caching - combine multiple cache operations:

@Caching(
    evict = {
        @CacheEvict(value = "products", key = "#product.id"),
        @CacheEvict(value = "categories", key = "#product.categoryId")
    }
)
public void deleteProduct(Product product) {
    productRepository.delete(product);
}

Complex cache key expressions:

@Cacheable(value = "search-results",
           key = "#category + '_' + #minPrice + '_' + #maxPrice",
           unless = "#result.isEmpty()")
public List<Product> searchProducts(String category, BigDecimal minPrice, BigDecimal maxPrice) {
    return productRepository.findByCategoryAndPriceBetween(category, minPrice, maxPrice);
}

Cache Providers

Definition

  • Cache providers are the underlying storage mechanisms that actually hold cached data.
  • Spring Boot supports various providers: ConcurrentHashMap for simple in-memory caching, Caffeine for high-performance local caching with advanced features like size-based eviction and time-based expiration, Redis for distributed caching across multiple servers, and Hazelcast for in-memory data grids.
  • Each provider has different characteristics regarding performance, memory management, persistence, and distribution capabilities.

Examples

Caffeine high-performance cache:

<!-- Add Caffeine dependency -->
<dependency>
    <groupId>com.github.ben-manes.caffeine</groupId>
    <artifactId>caffeine</artifactId>
</dependency>

@Bean
public CacheManager caffeineCacheManager() {
    CaffeineCacheManager manager = new CaffeineCacheManager();
    manager.setCaffeine(Caffeine.newBuilder()
        .maximumSize(1000)                    // Max 1000 entries
        .expireAfterWrite(10, TimeUnit.MINUTES) // Expire after 10 minutes
        .expireAfterAccess(5, TimeUnit.MINUTES) // Expire if not accessed for 5 minutes
        .recordStats());                      // Enable statistics
    return manager;
}

Redis distributed cache:

<!-- Add Redis dependency -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>

@Bean
public CacheManager redisCacheManager(RedisConnectionFactory factory) {
    RedisCacheConfiguration config = RedisCacheConfiguration.defaultCacheConfig()
        .entryTtl(Duration.ofMinutes(30))    // 30-minute TTL
        .serializeKeysWith(RedisSerializationContext.SerializationPair
            .fromSerializer(new StringRedisSerializer()))
        .serializeValuesWith(RedisSerializationContext.SerializationPair
            .fromSerializer(new GenericJackson2JsonRedisSerializer()));

    return RedisCacheManager.builder(factory)
        .cacheDefaults(config)
        .transactionAware()
        .build();
}

Multiple cache managers:

@Configuration
public class MultiCacheConfig {

    @Bean
    @Primary
    public CacheManager localCacheManager() {
        // Fast local cache for frequently accessed data
        return new CaffeineCacheManager("products", "users");
    }

    @Bean
    public CacheManager distributedCacheManager(RedisConnectionFactory factory) {
        // Distributed cache for shared data across instances
        return RedisCacheManager.builder(factory).build();
    }
}

Cache Configuration

Definition

  • Cache configuration defines how caches behave regarding size limits, expiration policies, eviction strategies, and serialization.
  • Configuration includes setting maximum cache sizes to prevent memory overflow, time-based expiration (TTL) to ensure data freshness, access-based expiration for unused data cleanup, and cache warming strategies for critical data.
  • Proper configuration ensures optimal memory usage while maintaining the right balance between performance and data freshness.
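The time-based expiration described above can be sketched in plain Java. This is a hedged illustration of expire-after-write semantics only - real providers like Caffeine use far more sophisticated background maintenance; here staleness is checked lazily on read, and time is passed in explicitly so the behavior is easy to verify:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of expire-after-write (TTL) semantics.
public class TtlCacheDemo<K, V> {

    private static final class Entry<V> {
        final V value;
        final long writtenAtMillis;
        Entry(V value, long writtenAtMillis) {
            this.value = value;
            this.writtenAtMillis = writtenAtMillis;
        }
    }

    private final Map<K, Entry<V>> store = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public TtlCacheDemo(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    public void put(K key, V value, long nowMillis) {
        store.put(key, new Entry<>(value, nowMillis));
    }

    public V get(K key, long nowMillis) {
        Entry<V> entry = store.get(key);
        if (entry == null) {
            return null;                      // never cached
        }
        if (nowMillis - entry.writtenAtMillis >= ttlMillis) {
            store.remove(key);                // past its TTL: evict lazily
            return null;
        }
        return entry.value;
    }
}
```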

Examples

Environment-specific cache configuration:

# application-dev.properties - Development settings
spring.cache.caffeine.spec=maximumSize=100,expireAfterWrite=5m
logging.level.org.springframework.cache=DEBUG

# application-prod.properties - Production settings
spring.cache.caffeine.spec=maximumSize=10000,expireAfterWrite=30m
spring.cache.redis.time-to-live=1800000

Advanced cache configurations:

@Configuration
public class CacheConfig {

    @Bean
    public CacheManager cacheManager() {
        CaffeineCacheManager manager = new CaffeineCacheManager();

        // Short-lived user sessions cache
        manager.registerCustomCache("user-sessions",
            Caffeine.newBuilder()
                .expireAfterAccess(30, TimeUnit.MINUTES)
                .maximumSize(5000)
                .build());

        // Long-lived reference data cache
        manager.registerCustomCache("reference-data",
            Caffeine.newBuilder()
                .expireAfterWrite(4, TimeUnit.HOURS)
                .maximumSize(1000)
                .build());

        // Frequently accessed products (note: Caffeine's refreshAfterWrite only
        // applies to a LoadingCache built with a CacheLoader, so this plain
        // cache relies on expireAfterWrite alone)
        manager.registerCustomCache("hot-products",
            Caffeine.newBuilder()
                .expireAfterWrite(1, TimeUnit.HOURS)
                .maximumSize(2000)
                .build());

        return manager;
    }
}

Cache with custom serialization:

@Bean
public RedisCacheManager redisCacheManager(RedisConnectionFactory factory) {
    // Configure JSON serialization for complex objects
    RedisCacheConfiguration config = RedisCacheConfiguration.defaultCacheConfig()
        .entryTtl(Duration.ofHours(1))
        .serializeValuesWith(RedisSerializationContext.SerializationPair
            .fromSerializer(new GenericJackson2JsonRedisSerializer()));

    // Different TTL for different caches
    Map<String, RedisCacheConfiguration> cacheConfigs = new HashMap<>();
    cacheConfigs.put("products", config.entryTtl(Duration.ofMinutes(30)));
    cacheConfigs.put("users", config.entryTtl(Duration.ofHours(2)));
    cacheConfigs.put("reference-data", config.entryTtl(Duration.ofDays(1)));

    return RedisCacheManager.builder(factory)
        .cacheDefaults(config)
        .withInitialCacheConfigurations(cacheConfigs)
        .build();
}

Cache Strategies

Definition

  • Cache strategies define when and how data is loaded into and removed from the cache.
  • Common strategies include Cache-Aside (application manages cache), Write-Through (write to cache and database simultaneously), Write-Behind (write to cache immediately, database later), and Read-Through (cache loads data automatically on miss).
  • Each strategy has trade-offs between consistency, performance, and complexity, making strategy selection crucial for application requirements.

Examples

Cache-Aside pattern (most common):

@Service
public class ProductService {

    @Cacheable("products")
    public Product getProduct(Long id) {
        // Cache-aside: check cache first, load from DB on miss
        return productRepository.findById(id).orElse(null);
    }

    @CacheEvict(value = "products", key = "#product.id")
    public Product updateProduct(Product product) {
        // Update database and remove from cache
        return productRepository.save(product);
    }
}

Write-Through pattern:

@Service
public class UserService {

    @CachePut(value = "users", key = "#user.id")
    public User saveUser(User user) {
        // Write-through: save to database and update cache
        User saved = userRepository.save(user);
        logger.info("User saved to database and cache: {}", saved.getId());
        return saved;
    }
}
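The Write-Behind pattern listed earlier has no annotation equivalent in Spring's cache abstraction, so here is a minimal plain-Java sketch of the idea. The maps and method names are illustrative; in production the flush step would run on a scheduler or background thread, and the "database" would be a real repository:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of Write-Behind: the cache is updated immediately, while the
// database write is queued and applied later in a batch.
public class WriteBehindDemo {
    private final Map<Long, String> cache = new ConcurrentHashMap<>();
    private final Map<Long, String> pendingWrites = new ConcurrentHashMap<>();
    private final Map<Long, String> database = new ConcurrentHashMap<>();

    public void save(Long id, String value) {
        cache.put(id, value);            // cache updated immediately
        pendingWrites.put(id, value);    // database write deferred
    }

    public void flush() {
        // Apply the deferred writes to the backing store in one batch
        database.putAll(pendingWrites);
        pendingWrites.clear();
    }

    public String fromCache(Long id)    { return cache.get(id); }
    public String fromDatabase(Long id) { return database.get(id); }
}
```

The trade-off is visible in the sketch: reads see fresh data at once, but until flush() runs the database lags behind the cache, which is why write-behind suits analytics-style data better than transactional data.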

Cache warming strategy:

@Component
public class CacheWarmupService {

    @Autowired
    private ProductService productService;

    @Autowired
    private ProductRepository productRepository;

    @EventListener(ApplicationReadyEvent.class)
    public void warmupCache() {
        logger.info("Starting cache warmup...");

        // Warm up with popular products
        List<Long> popularProductIds = productRepository.findPopularProductIds();
        popularProductIds.forEach(id -> {
            try {
                productService.getProduct(id); // This will populate the cache
            } catch (Exception e) {
                logger.warn("Failed to warm cache for product: {}", id, e);
            }
        });

        logger.info("Cache warmup completed for {} products", popularProductIds.size());
    }
}

Multi-level caching strategy:

@Service
public class MultiLevelCacheService {

    // Note: calling getProductL2 from getProductL1 through "this" bypasses the
    // Spring proxy, so the l2-cache annotation is ignored for that call. In
    // practice, inject the bean into itself or use AspectJ weaving so the
    // inner call also goes through the caching proxy.
    @Cacheable(value = "l1-cache", key = "#id") // Level 1: Fast local cache
    public Product getProductL1(Long id) {
        return getProductL2(id);
    }

    @Cacheable(value = "l2-cache", key = "#id") // Level 2: Distributed cache
    public Product getProductL2(Long id) {
        logger.info("Loading from database: {}", id);
        return productRepository.findById(id).orElse(null);
    }

    @CacheEvict(value = {"l1-cache", "l2-cache"}, key = "#id")
    public void evictProduct(Long id) {
        logger.info("Evicted product from all cache levels: {}", id);
    }
}
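The Read-Through strategy mentioned in the definition differs from cache-aside in that the cache itself owns the loader and invokes it on a miss - the caller never touches the database directly. A minimal generic sketch of this principle (Caffeine's LoadingCache works the same way, but this illustration uses only the standard library):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Sketch of Read-Through: the cache is constructed with a loader function
// and calls it itself on a miss, storing the result before returning it.
public class ReadThroughCache<K, V> {
    private final Map<K, V> store = new ConcurrentHashMap<>();
    private final Function<K, V> loader;

    public ReadThroughCache(Function<K, V> loader) {
        this.loader = loader;
    }

    public V get(K key) {
        // Miss -> the cache invokes the loader, then caches the result
        return store.computeIfAbsent(key, loader);
    }
}
```

Usage: `new ReadThroughCache<Long, Product>(id -> productRepository.findById(id).orElse(null))` would give callers a single `get` method with loading handled transparently.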

Distributed Caching

Definition

  • Distributed caching extends caching across multiple application instances or servers, providing shared cache storage that all instances can access.
  • This enables horizontal scaling while maintaining cache consistency and eliminates cache warm-up requirements for new instances.
  • Distributed caching is essential for microservices architectures and applications running on multiple servers or containers.

Examples

Redis cluster configuration:

# application.properties (spring.redis.* is the Spring Boot 2.x prefix; Spring Boot 3 uses spring.data.redis.*)
spring.redis.cluster.nodes=redis1:6379,redis2:6379,redis3:6379
spring.redis.cluster.max-redirects=3
spring.redis.timeout=2000ms
spring.redis.lettuce.pool.max-active=8
spring.redis.lettuce.pool.max-idle=8

Distributed cache with fallback:

@Service
public class RobustCacheService {

    @Autowired
    private CacheManager distributedCacheManager;

    @Autowired
    private CacheManager localCacheManager;

    public Product getProduct(Long id) {
        try {
            // Try distributed cache first
            Cache distributedCache = distributedCacheManager.getCache("products");
            Product product = distributedCache.get(id, Product.class);

            if (product != null) {
                return product;
            }
        } catch (Exception e) {
            logger.warn("Distributed cache unavailable, falling back to local cache", e);
        }

        // Fallback to local cache and database
        return getFromLocalCacheOrDatabase(id);
    }

    private Product getFromLocalCacheOrDatabase(Long id) {
        Cache localCache = localCacheManager.getCache("products");
        return localCache.get(id, () -> productRepository.findById(id).orElse(null));
    }
}
}

Cache synchronization across instances:

@Component
public class CacheSyncService {

    @Autowired
    private CacheManager cacheManager;

    @Autowired
    private MessagingService messagingService; // application-specific broadcast helper

    @EventListener
    public void handleProductUpdate(ProductUpdateEvent event) {
        // Notify other instances to evict cache
        messagingService.broadcast("cache.evict.product", event.getProductId());
    }

    @RabbitListener(queues = "cache.evict.product")
    public void handleCacheEviction(Long productId) {
        cacheManager.getCache("products").evict(productId);
        logger.info("Evicted product {} from local cache due to remote update", productId);
    }
}

Cache Best Practices

Definition

  • Cache best practices ensure optimal performance, memory usage, and data consistency while avoiding common pitfalls.
  • These include choosing appropriate cache keys, setting reasonable TTL values, implementing cache warming for critical data, monitoring cache performance, handling cache failures gracefully, and avoiding cache stampedes.
  • Following best practices prevents memory leaks, ensures cache effectiveness, and maintains application reliability.

Examples

Effective cache key design:

@Service
public class CacheKeyService {

    // Good: Specific, hierarchical keys
    @Cacheable(value = "products", key = "'product:' + #id + ':' + #locale")
    public Product getLocalizedProduct(Long id, String locale) {
        return productService.getProductWithLocalization(id, locale);
    }

    // Good: Include version for cache invalidation
    @Cacheable(value = "api-responses",
               key = "'v1:products:' + #category + ':page:' + #page")
    public ApiResponse getProductPage(String category, int page) {
        return apiService.getProducts(category, page);
    }

    // Good: Handle null parameters
    @Cacheable(value = "search-results",
               key = "'search:' + (#query ?: 'empty') + ':' + (#filters ?: 'none')")
    public SearchResults search(String query, Map<String, Object> filters) {
        return searchService.search(query, filters);
    }
}

Cache performance monitoring:

@Component
public class CachePerformanceMonitor {

    private final MeterRegistry meterRegistry;
    private final CacheManager cacheManager;

    public CachePerformanceMonitor(MeterRegistry meterRegistry, CacheManager cacheManager) {
        this.meterRegistry = meterRegistry;
        this.cacheManager = cacheManager;
    }

    // CacheHitEvent and CacheMissEvent are application-defined events here;
    // Spring does not publish cache hit/miss events out of the box
    @EventListener
    public void onCacheHit(CacheHitEvent event) {
        meterRegistry.counter("cache.hits",
            "cache", event.getCacheName()).increment();
    }

    @EventListener
    public void onCacheMiss(CacheMissEvent event) {
        meterRegistry.counter("cache.misses",
            "cache", event.getCacheName()).increment();
    }

    @Scheduled(fixedRate = 60000) // Every minute
    public void logCacheStatistics() {
        cacheManager.getCacheNames().forEach(cacheName -> {
            Cache cache = cacheManager.getCache(cacheName);
            if (cache.getNativeCache() instanceof com.github.benmanes.caffeine.cache.Cache) {
                var caffeineCache = (com.github.benmanes.caffeine.cache.Cache<?, ?>)
                    cache.getNativeCache();
                var stats = caffeineCache.stats();

                logger.info("Cache {}: hit rate={}%, size={}, evictions={}",
                    cacheName, String.format("%.2f", stats.hitRate() * 100),
                    caffeineCache.estimatedSize(), stats.evictionCount());
            }
        });
    }
}

Graceful cache degradation:

@Service
public class FaultTolerantCacheService {

    @Retryable(value = CacheException.class, maxAttempts = 3)
    @Cacheable(value = "products", key = "#id")
    public Product getProduct(Long id) {
        return productRepository.findById(id).orElse(null);
    }

    @Recover
    public Product recoverFromCacheFailure(CacheException ex, Long id) {
        logger.warn("Cache operation failed, fetching directly from database: {}", id, ex);
        // Fetch directly without caching
        return productRepository.findById(id).orElse(null);
    }

    // Prevent cache stampede with synchronized cache loading
    @Cacheable(value = "expensive-computation", sync = true)
    public ExpensiveResult performExpensiveOperation(String input) {
        logger.info("Performing expensive operation for: {}", input);
        return computationService.compute(input);
    }
}
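The guarantee behind sync = true can be demonstrated in plain Java: ConcurrentHashMap.computeIfAbsent applies the mapping function at most once per key, even when many threads request the same missing key at the same time. This is a sketch of the stampede-prevention idea, not Spring's actual implementation:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Demonstrates why synchronized loading prevents a cache stampede: N
// concurrent requests for the same missing key trigger ONE computation.
public class StampedeDemo {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final AtomicInteger computations = new AtomicInteger();

    public String get(String key) {
        return cache.computeIfAbsent(key, k -> {
            computations.incrementAndGet();   // expensive work runs once per key
            try { Thread.sleep(50); } catch (InterruptedException ignored) {}
            return "result-for-" + k;
        });
    }

    public int computations() { return computations.get(); }

    // Ten threads hammer the same cold key; the slow computation still runs once.
    public static int concurrentComputations() {
        StampedeDemo demo = new StampedeDemo();
        Thread[] threads = new Thread[10];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> demo.get("hot-key"));
            threads[i].start();
        }
        for (Thread t : threads) {
            try {
                t.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return demo.computations();
    }
}
```

Without this per-key locking, all ten threads would miss simultaneously and each would hit the database - exactly the stampede that sync = true avoids.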

Cache configuration best practices:

@Configuration
public class OptimalCacheConfig {

    @Bean
    public CacheManager cacheManager() {
        CaffeineCacheManager manager = new CaffeineCacheManager();

        // Configure based on data characteristics
        manager.registerCustomCache("user-profiles",
            Caffeine.newBuilder()
                .maximumSize(10000)           // Limit memory usage
                .expireAfterWrite(1, TimeUnit.HOURS)    // Data freshness
                // Proactive refresh (refreshAfterWrite) would require a
                // LoadingCache built with a CacheLoader
                .recordStats()                // Monitor performance
                .build());

        manager.registerCustomCache("session-data",
            Caffeine.newBuilder()
                .maximumSize(50000)
                .expireAfterAccess(30, TimeUnit.MINUTES) // Session timeout
                .recordStats()
                .build());

        return manager;
    }
}

Summary

You've now mastered Spring Boot's comprehensive caching capabilities, from basic cache annotations to sophisticated distributed caching strategies. You understand how caching transforms application performance by storing frequently accessed data in fast-access storage, reducing expensive operations and improving response times dramatically.

You've learned to implement cache annotations for declarative caching behavior, configure different cache providers from simple in-memory solutions to enterprise-grade distributed systems, and design effective cache strategies that balance performance with data consistency. The concepts of cache keys, TTL configuration, eviction policies, and cache warming ensure your caching implementation is both performant and reliable.

You've also explored distributed caching for scalable applications, multi-level caching strategies for optimal performance, and best practices for monitoring and maintaining cache health in production environments. These skills enable you to build applications that can handle high loads efficiently while providing excellent user experience through optimized data access patterns. Your next lesson will integrate caching with comprehensive monitoring and application performance optimization techniques.

Programming Challenge

Challenge: High-Performance E-commerce Caching System

Task: Build a Spring Boot e-commerce application with a sophisticated multi-level caching strategy that demonstrates various caching patterns, performance optimization, and intelligent cache management.

Requirements:

  1. Create core entities and services:
    • Product, Category, User, Order, Review entities
    • Repository layer with complex queries and aggregations
    • Service layer implementing business logic with multiple data access patterns
    • REST controllers for product catalog, user management, and order processing
  2. Implement multi-level caching strategy:
    • L1 Cache: Individual entities (products, users) with 1-hour TTL
    • L2 Cache: Collections and search results (categories, popular products) with 30-min TTL
    • L3 Cache: Aggregated data (statistics, reports) with 4-hour TTL
    • L4 Cache: Reference data (configurations, settings) with daily TTL
  3. Demonstrate various caching patterns:
    • Cache-aside pattern for product catalog
    • Write-through pattern for user profiles
    • Write-behind pattern for analytics data
    • Read-through pattern for computed recommendations
  4. Implement intelligent cache management:
    • Cache warming service that preloads popular products at startup
    • Event-driven cache invalidation when products are updated
    • Bulk cache operations for administrative tasks
    • Cache synchronization across multiple application instances
  5. Add performance optimization features:
    • Conditional caching based on product popularity and user behavior
    • Cache compression for large objects (product catalogs, order history)
    • Cache partitioning by user segments or geographic regions
    • Async cache refresh to prevent cache stampedes
  6. Create cache monitoring and management:
    • Cache statistics tracking (hit rates, eviction counts, memory usage)
    • Performance comparison endpoints (with/without cache)
    • Administrative endpoints for cache management and clearing
    • Cache health indicators and alerts for cache failures
  7. Implement distributed caching with Redis:
    • Configure Redis for session storage and shared cache data
    • Implement cache failover from Redis to local cache
    • Cache clustering and data partitioning strategies
    • Cross-instance cache invalidation messaging

Bonus features:

  • Machine learning-based cache warming using user behavior patterns
  • Geographic cache distribution with regional data centers
  • Cache versioning system for rolling updates without cache misses
  • A/B testing framework for different caching strategies
  • Real-time cache analytics dashboard with performance metrics
  • Intelligent cache eviction based on business value and access patterns

Learning Goals: Master comprehensive caching strategies, understand performance implications of different cache patterns, implement production-ready cache management, practice distributed caching concepts, and build systems that demonstrate measurable performance improvements through intelligent caching.