
Spring Boot Concurrency Essentials

A beginner-friendly guide to how Spring Boot runs work in parallel: what async methods and schedulers are, why thread pools matter, and how web stacks, transactions, and Reactor fit in.

Wassim Lagnaoui · Java Full Stack Developer

Introduction: what is concurrency?

Concurrency means making progress on more than one task at a time, either truly in parallel on separate CPU cores or by interleaving work across threads. In a web app, it means one user’s slow task shouldn’t block everyone else. Threads, pools, and schedulers let work run in parallel or in the background so requests stay responsive.

This article focuses on how Spring Boot uses those building blocks: when @Async makes sense, how to size thread pools, what schedulers do, and why transactions don’t hop between threads. We keep the language simple and the examples practical.

@Async and async configuration

Async methods allow work to run in background threads instead of blocking the caller. Enabling async support wires Spring to intercept @Async methods and hand them to a thread pool.

In plain words: when a task takes time (like calling another service or sending email), marking a method with @Async lets your app keep responding while that task continues in the background.

  • Enabling: @EnableAsync on a configuration class turns on async method execution.
  • Annotating: @Async on public methods submits work to an executor; return types can be void, Future, or CompletableFuture.
  • Where it runs: Spring Boot auto-configures a ThreadPoolTaskExecutor (the applicationTaskExecutor bean) that @Async uses by default; plain Spring falls back to a SimpleAsyncTaskExecutor. For control, define your own TaskExecutor bean and reference it by name.
  • Beginner note: @Async calls only take effect when invoked through a Spring proxy (i.e., from another bean, not this.method()).
@Configuration
@EnableAsync
public class AsyncConfig implements AsyncConfigurer {
  @Bean(name = "ioExecutor")
  public Executor ioExecutor() {
    ThreadPoolTaskExecutor ex = new ThreadPoolTaskExecutor();
    ex.setThreadNamePrefix("io-");
    ex.setCorePoolSize(8);
    ex.setMaxPoolSize(32);
    ex.setQueueCapacity(200);
    ex.initialize();
    return ex;
  }

  @Override
  public Executor getAsyncExecutor() { return ioExecutor(); }
}

@Service
public class ReportService {
  @Async("ioExecutor")
  public CompletableFuture<Report> generateReport(Long id) {
    // expensive IO work here
    return CompletableFuture.completedFuture(new Report(id));
  }
}
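
One common pitfall from the list above is self-invocation: calling an @Async method from inside the same class goes through this, not the Spring proxy, so the method runs synchronously. A minimal sketch (class and method names are illustrative):

@Service
public class NotificationService {
  // PITFALL: self-invocation skips the proxy, so sendAsync runs on the caller's thread
  public void notifyUsers(List<String> emails) {
    emails.forEach(this::sendAsync);
  }

  @Async("ioExecutor")
  public void sendAsync(String email) { /* send email in the background */ }
}

The usual fix is to move the @Async method into its own bean and inject it, so the call crosses the proxy boundary and is handed to the executor.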

TaskExecutor abstractions

Spring’s TaskExecutor abstracts Java’s Executor/ExecutorService, letting beans submit tasks without depending on concrete pool types.

Think of a TaskExecutor as a pool of workers: you hand it jobs, and it finds a free worker thread to run them. This keeps your code simple and your app stable.

  • Common choices: ThreadPoolTaskExecutor (most apps), SimpleAsyncTaskExecutor (unbounded threads; demo/dev), ConcurrentTaskExecutor (wrap an existing ExecutorService).
  • Why it matters: You can size pools, name threads, and monitor queues—key for stability and debugging.
  • How to inject: Define as a bean and inject by type or qualifier.
@Service
class ImageService {
  private final TaskExecutor thumbnailsExecutor;
  ImageService(@Qualifier("ioExecutor") TaskExecutor ex) { this.thumbnailsExecutor = ex; }

  public void generateThumbnails(List<URI> images) {
    for (URI uri : images) {
      thumbnailsExecutor.execute(() -> {/* process */});
    }
  }
}
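
If you already have a plain java.util.concurrent.ExecutorService (for example one created by a library), the ConcurrentTaskExecutor mentioned above can expose it as a Spring TaskExecutor. A small sketch with illustrative bean names:

@Configuration
class LegacyPoolConfig {
  // An existing JDK pool, managed by Spring so it shuts down with the context
  @Bean(destroyMethod = "shutdown")
  ExecutorService legacyPool() { return Executors.newFixedThreadPool(4); }

  // Adapt it to the TaskExecutor abstraction so beans can inject it by type or qualifier
  @Bean
  TaskExecutor legacyExecutor(ExecutorService legacyPool) {
    return new ConcurrentTaskExecutor(legacyPool);
  }
}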

Scheduling with @Scheduled

Scheduled tasks run automatically on a timer. Spring drives them with a TaskScheduler backed by a scheduled executor.

In simple terms: you can tell Spring “run this method every 5 seconds” or “every night at 2 AM” to do cleanups, reports, or syncs.

  • Fixed rate: Starts runs on a steady cadence measured from the start of the previous run; if a run overruns the interval, the next one begins as soon as it finishes (by default the same method never runs concurrently with itself).
  • Fixed delay: Waits a delay after a run completes before starting the next.
  • Cron: Uses a cron expression for calendar-like schedules.
@Configuration
@EnableScheduling
class SchedulingConfig {
  @Bean
  TaskScheduler scheduler() {
    ThreadPoolTaskScheduler ts = new ThreadPoolTaskScheduler();
    ts.setPoolSize(4);
    ts.setThreadNamePrefix("sched-");
    return ts;
  }
}

@Service
class CleanupService {
  @Scheduled(fixedRate = 5_000)
  void heartbeat() { /* runs every 5s */ }

  @Scheduled(fixedDelay = 10_000, initialDelay = 2_000)
  void cleanup() { /* runs 10s after previous completion */ }

  @Scheduled(cron = "0 0 2 * * *")
  void nightly() { /* runs at 02:00 daily */ }
}

Thread pool configuration

Thread pools control parallelism and protect the app from overload. Right-sizing avoids both starvation and resource exhaustion.

Why it matters: too few threads can make users wait; too many threads can overwhelm the CPU and database. A balanced pool keeps your app responsive.

  • Key knobs: corePoolSize, maxPoolSize, queueCapacity, and keepAliveSeconds.
  • Blocking tasks: IO-heavy work usually needs more threads than CPU-heavy work; dedicate executors by workload type.
  • Defaults: Plain Spring’s @Async fallback (SimpleAsyncTaskExecutor) creates a new thread for every task, and Boot’s auto-configured pool queues work without bound; an explicitly bounded pool with a capped queue is safer.
@Bean(name = "cpuExecutor")
Executor cpuExecutor() {
  ThreadPoolTaskExecutor ex = new ThreadPoolTaskExecutor();
  ex.setThreadNamePrefix("cpu-");
  ex.setCorePoolSize(Runtime.getRuntime().availableProcessors());
  ex.setMaxPoolSize(Runtime.getRuntime().availableProcessors()*2);
  ex.setQueueCapacity(1000);
  ex.initialize();
  return ex;
}
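
When both the pool and its queue are full, new tasks are rejected. ThreadPoolTaskExecutor lets you pick what happens then; here is a sketch using the JDK’s CallerRunsPolicy, which runs the overflow task on the submitting thread and so naturally slows producers down (the bean name is illustrative):

@Bean(name = "boundedIoExecutor")
Executor boundedIoExecutor() {
  ThreadPoolTaskExecutor ex = new ThreadPoolTaskExecutor();
  ex.setThreadNamePrefix("bounded-io-");
  ex.setCorePoolSize(8);
  ex.setMaxPoolSize(16);
  ex.setQueueCapacity(100);
  // When saturated, run the task on the caller's thread instead of throwing RejectedExecutionException
  ex.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
  ex.initialize();
  return ex;
}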

Spring bean thread-safety basics

Singleton beans are shared across threads. Keeping them stateless (no mutable fields) avoids race conditions. When state is needed, use safe scopes or synchronization.

Beginner view: pretend many users call the same bean at once. If it stores changing data in fields, calls can step on each other. Passing data via method args is safer.

  • Default scope: singleton means one instance per application context.
  • Stateless is simpler: Prefer passing data via method parameters; avoid mutable shared fields.
  • Other scopes: prototype, request, session can hold per-use or per-request state when necessary.
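
A hypothetical before/after to make the race concrete: the first bean keeps request data in a field of a shared singleton, the second passes it through parameters and locals that each thread owns.

@Service
class UnsafeFormatter {
  private String lastInput; // shared mutable field: two requests can overwrite each other

  String format(String input) {
    this.lastInput = input;  // thread A's value may be replaced by thread B before the next line runs
    return "[" + this.lastInput + "]";
  }
}

@Service
class SafeFormatter {
  String format(String input) {
    return "[" + input + "]"; // parameters and locals are confined to the calling thread
  }
}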

@Transactional and async execution

Transactions are bound to the calling thread. Async methods run on another thread, so they do not automatically join the caller’s transaction.

In practice: saving an order and kicking off an async payment means the payment runs in its own transaction. If the payment fails later, the order save won’t roll back automatically.

  • Boundary: @Transactional on a caller does not propagate into an @Async method.
  • Options: Start a new transaction inside the async method, or move the work to synchronous code if atomicity is required.
  • Beginner note: Logging MDC, security context, and locale also do not flow unless explicitly propagated.
@Service
class OrderService {
  private final PaymentService payments;
  OrderService(PaymentService p){this.payments=p;}

  @Transactional
  public void placeOrder(Order o){
    // save order within TX
    // fire-and-forget async charge (new thread, separate TX)
    payments.chargeAsync(o.getId());
  }
}

@Service
class PaymentService {
  @Async
  @Transactional // starts its own TX
  public void chargeAsync(Long orderId){
    // do charge, record payment
  }
}
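
If you do need the logging MDC (or similar context) on the async thread, a TaskDecorator on the executor can copy it over per task. A minimal sketch using SLF4J’s MDC; the bean name is illustrative and separate from the earlier ioExecutor:

@Bean(name = "mdcAwareExecutor")
public Executor mdcAwareExecutor() {
  ThreadPoolTaskExecutor ex = new ThreadPoolTaskExecutor();
  ex.setThreadNamePrefix("mdc-io-");
  ex.setCorePoolSize(8);
  // Capture the caller's MDC and install it on the worker thread for the duration of the task
  ex.setTaskDecorator(task -> {
    Map<String, String> context = MDC.getCopyOfContextMap();
    return () -> {
      if (context != null) MDC.setContextMap(context);
      try { task.run(); } finally { MDC.clear(); }
    };
  });
  ex.initialize();
  return ex;
}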

CompletableFuture with Spring

CompletableFuture supports composing async tasks without blocking. Pipelines apply transformations as results arrive.

Plain explanation: think of promises that complete later. You can say “when A and B are both done, combine them” without blocking a thread while you wait.

  • Composition: thenApply/thenCompose transform or chain tasks; allOf/anyOf combine multiple futures.
  • Executors: Use thenApplyAsync(..., executor) to control which pool runs stages.
  • Non-blocking: Prefer callbacks over get()/join() to avoid tying up threads.
@Service
class CatalogService {
  private final Executor ioExecutor;
  CatalogService(@Qualifier("ioExecutor") Executor ex){this.ioExecutor=ex;}

  public CompletableFuture<ProductView> loadView(long id){
    CompletableFuture<Product> p = CompletableFuture.supplyAsync(() -> findProduct(id), ioExecutor);
    CompletableFuture<Price> price = CompletableFuture.supplyAsync(() -> findPrice(id), ioExecutor);
    return CompletableFuture.allOf(p, price)
      .thenApply(v -> new ProductView(p.join(), price.join()));
  }
}
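
When the second call depends on the first result, thenCompose (rather than thenApply) keeps the pipeline flat, and the ...Async variants let you pick the pool per stage. A sketch with hypothetical findOwner, findAccount, and Account.missing helpers:

public CompletableFuture<Account> loadAccount(long userId) {
  return CompletableFuture
    .supplyAsync(() -> findOwner(userId), ioExecutor)       // stage 1 on the IO pool
    .thenComposeAsync(owner ->                               // stage 2 needs stage 1's result
        CompletableFuture.supplyAsync(() -> findAccount(owner), ioExecutor), ioExecutor)
    .exceptionally(ex -> Account.missing());                 // fallback if either stage fails
}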

Web requests and concurrency models

Spring MVC and Spring WebFlux handle concurrency differently, which affects scalability and coding style.

Simple idea: MVC gives each request a worker thread (great for simple, blocking code). WebFlux uses a small set of event-loop threads and non-blocking calls (great for many concurrent IO calls).

  • Spring MVC: Servlet-based, thread-per-request. Handlers run on a worker thread; blocking IO ties up that thread.
  • Spring WebFlux: Reactive and event-loop based (Netty/servlet non-blocking). Handlers return Mono/Flux; non-blocking IO scales with fewer threads.
  • Choosing: MVC fits traditional blocking stacks; WebFlux shines for many concurrent IO-bound calls.
// MVC controller (blocking)
@RestController
class MvcController {
  @GetMapping("/hello")
  String hello() { return "hello"; }
}

// WebFlux handler (non-blocking)
@RestController
class FluxController {
  @GetMapping("/greet")
  Mono<String> greet() { return Mono.just("hello"); }
}
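
To show the non-blocking style with real IO rather than a constant, a WebFlux handler usually returns a pipeline built from a non-blocking client such as WebClient; a sketch (the downstream URL is illustrative):

@RestController
class QuoteController {
  private final WebClient client = WebClient.create("http://localhost:8081");

  // No thread waits on the remote call; the event loop picks the work back up when the response arrives
  @GetMapping("/quote")
  Mono<String> quote() {
    return client.get().uri("/api/quote").retrieve().bodyToMono(String.class);
  }
}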

Synchronization and shared resources

Shared state across threads needs coordination. Without it, race conditions corrupt data or cause intermittent bugs.

Everyday example: two threads increment the same counter at the same time and one update gets lost. A lock makes them take turns so counts are correct.

  • In-memory locks: ReentrantLock or synchronized blocks protect critical sections within a single JVM.
  • Database locks: Row/table locks (e.g., SELECT ... FOR UPDATE) coordinate across app instances.
  • Distributed locks: Redis/ZooKeeper/Consul based locks coordinate work in a cluster.
  • Schedulers: When @Scheduled runs on many nodes, use leader election or distributed locks to avoid duplicate work.
@Service
class CounterService {
  private final ReentrantLock lock = new ReentrantLock();
  private long value;
  public void inc(){
    lock.lock();
    try { value++; } finally { lock.unlock(); }
  }
}
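
For the database option, Spring Data JPA can take a pessimistic row lock (roughly SELECT ... FOR UPDATE on most databases) that holds until the transaction ends, which also coordinates multiple app instances. A sketch assuming a hypothetical Account entity and repository:

interface AccountRepository extends JpaRepository<Account, Long> {
  // Locks the row; other transactions asking for the same lock wait until this one commits
  @Lock(LockModeType.PESSIMISTIC_WRITE)
  @Query("select a from Account a where a.id = :id")
  Optional<Account> findForUpdate(@Param("id") Long id);
}

@Service
class BalanceService {
  private final AccountRepository accounts;
  BalanceService(AccountRepository accounts) { this.accounts = accounts; }

  @Transactional // the row lock is released when this transaction commits or rolls back
  public void withdraw(Long id, long amount) {
    Account account = accounts.findForUpdate(id).orElseThrow();
    account.debit(amount); // hypothetical domain method
  }
}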

Using Java concurrency utilities in Spring

Spring manages the lifecycle of executors and queues just like any other bean, which keeps threads from leaking and eases shutdown.

Why use Spring-managed pools: when the app starts, the pool starts; when the app stops, the pool stops cleanly. You avoid orphan threads.

  • ExecutorService: Define as a bean and shut down on context close.
  • ForkJoinPool: Useful for divide-and-conquer tasks; prefer a custom pool over the global common pool.
  • Queues: BlockingQueue can buffer work between producers and consumers.
@Configuration
class PoolConfig {
  @Bean(destroyMethod = "shutdown")
  ExecutorService ioPool(){ return Executors.newFixedThreadPool(16, new CustomizableThreadFactory("io-")); } // threads named io-1, io-2, ...
}

@Service
class ImportService {
  private final ExecutorService ioPool;
  ImportService(ExecutorService ioPool){ this.ioPool = ioPool; }
  public Future<?> submitImport(Runnable task){ return ioPool.submit(task); }
}
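
The same lifecycle idea covers a dedicated ForkJoinPool: declaring it as a bean keeps divide-and-conquer work off the shared common pool and shuts it down with the context. A brief sketch:

@Configuration
class ForkJoinConfig {
  // A dedicated pool for CPU-bound, divide-and-conquer work (parallel streams, RecursiveTask, etc.)
  @Bean(destroyMethod = "shutdown")
  ForkJoinPool forkJoinPool() {
    return new ForkJoinPool(Runtime.getRuntime().availableProcessors());
  }
}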

Error handling in async methods

Exceptions thrown on background threads don’t reach the caller automatically. Handling depends on the async style used.

Beginner takeaway: if you fire-and-forget a method, add a global handler to log failures; if you use futures, attach exceptionally to define fallbacks.

  • @Async void: Hook AsyncUncaughtExceptionHandler to log/notify.
  • Future/CompletableFuture: Propagate via get()/join() or handle with exceptionally/handle.
  • Global config: Implement AsyncConfigurer to set executor and exception handler in one place.
@Configuration
@EnableAsync
class AsyncErrorConfig implements AsyncConfigurer {
  @Override
  public Executor getAsyncExecutor() { return new SimpleAsyncTaskExecutor("async-"); } // fine for a demo; prefer a bounded ThreadPoolTaskExecutor in real apps

  @Override
  public AsyncUncaughtExceptionHandler getAsyncUncaughtExceptionHandler() {
    return (ex, method, params) -> {
      // log, metrics, alerting
    };
  }
}

// CompletableFuture handling (reportService is the ReportService shown earlier)
reportService.generateReport(42L)
  .exceptionally(ex -> { /* fallback */ return Report.empty(); });

Reactive concurrency with Project Reactor (optional)

Reactor offers non-blocking types—Mono and Flux—and schedulers to control where work runs. It enables high concurrency with fewer threads.

If you’re new: this is a different style where you return a pipeline (not a value). It shines for apps that call many slow services and need to scale with minimal threads.

  • Types: Mono<T> for 0..1 values, Flux<T> for 0..N.
  • Schedulers: boundedElastic for blocking IO, parallel for CPU work, immediate for current thread.
  • Backpressure: Operators like onBackpressureBuffer handle fast producers and slow consumers.
Mono.fromCallable(() -> blockingCall())
  .subscribeOn(Schedulers.boundedElastic())
  .map(this::transform)
  .publishOn(Schedulers.parallel())
  .subscribe(result -> {/* use result */});
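
And a small Flux example for the 0..N case: a timed producer keeps emitting at its own pace, so a buffer absorbs bursts while a slower consumer catches up (the numbers and the process method are illustrative):

Flux.interval(Duration.ofMillis(1))          // hot producer that ignores the consumer's pace
  .onBackpressureBuffer(1_000)               // absorb bursts instead of failing on overflow
  .publishOn(Schedulers.boundedElastic())    // move downstream work to a pool that may block
  .subscribe(n -> process(n));               // process(n): a hypothetical slow consumer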