Thread Pools in Java

A thread pool is a managed set of reusable worker threads that execute tasks from a shared queue, instead of creating a new thread per task. In real systems, thread pools control resource usage, limit context-switching overhead, and make latency and throughput more predictable. This article explains the model, how to use Java's ExecutorService, and the trade-offs behind pool size and queue choice.

Overview

Creating a new thread for every task is expensive. The OS must allocate stack space, kernel structures, and thread-local storage. Destroying threads has similar cost. When the number of threads grows large, context switching between them degrades performance. A thread pool addresses this by:

  • Reusing threads: A fixed (or bounded) set of worker threads repeatedly takes a task from the queue, executes it, and then takes the next. There is no per-task thread creation or destruction.
  • Bounded concurrency: You control the maximum number of threads, avoiding resource exhaustion and excessive context switching.
  • Queueing: Tasks that cannot run immediately wait in a queue. You can choose bounded queues for backpressure or unbounded queues when you accept the risk of memory growth.

Java's ExecutorService (e.g. from Executors.newFixedThreadPool(n)) is the standard abstraction. You submit Runnable or Callable tasks, and the pool schedules them. You should shut down the pool when the application is done so threads can exit cleanly.

Example

Fixed-size pool with submission and shutdown

Java
ExecutorService pool = Executors.newFixedThreadPool(4);

for (int i = 0; i < 10; i++) {
    final int id = i;
    pool.submit(() -> {
        System.out.println("Task " + id + " on " + Thread.currentThread().getName());
    });
}

pool.shutdown();
pool.awaitTermination(10, TimeUnit.SECONDS);

  • newFixedThreadPool(4) creates a pool of four worker threads (started lazily as tasks arrive) backed by an unbounded LinkedBlockingQueue. The first four tasks may run immediately; the rest wait in the queue.
  • submit() returns a Future; the worker runs the task and then takes the next from the queue.
  • shutdown() stops accepting new tasks; awaitTermination(10, TimeUnit.SECONDS) blocks until existing tasks finish or the timeout elapses.
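
The two shutdown lines above are the minimal version. If tasks may hang, a common fuller pattern (adapted from the shutdown example in the ExecutorService Javadoc) falls back to shutdownNow() when the timeout elapses. A minimal sketch, assuming pool is the executor from above:

Java
pool.shutdown();                          // stop accepting new tasks
try {
    if (!pool.awaitTermination(10, TimeUnit.SECONDS)) {
        pool.shutdownNow();               // interrupt tasks that are still running
        if (!pool.awaitTermination(10, TimeUnit.SECONDS)) {
            System.err.println("Pool did not terminate");
        }
    }
} catch (InterruptedException e) {
    pool.shutdownNow();                   // re-cancel if the current thread was interrupted
    Thread.currentThread().interrupt();   // preserve the interrupt status
}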

Using Callable for return values

Java
ExecutorService pool = Executors.newFixedThreadPool(4);

Future<Integer> future = pool.submit(() -> {
    Thread.sleep(100);
    return 42;
});

int result = future.get();  // blocks until done
System.out.println(result);  // 42
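
get() also propagates failures: if the Callable throws, the exception arrives wrapped in an ExecutionException whose cause is the original exception. A minimal sketch of handling the common outcomes, reusing the pool and future variables from above (the timeout value is illustrative):

Java
try {
    System.out.println(future.get(1, TimeUnit.SECONDS));  // get with a timeout
} catch (ExecutionException e) {
    e.getCause().printStackTrace();       // the exception thrown inside the task
} catch (TimeoutException e) {
    System.err.println("Task did not finish in time");
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();   // preserve the interrupt status
} finally {
    pool.shutdown();
}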

Handling rejections with a custom pool

Java
ThreadPoolExecutor pool = new ThreadPoolExecutor(
    2,                                          // corePoolSize
    4,                                          // maximumPoolSize
    60, TimeUnit.SECONDS,                       // keep-alive for idle non-core threads
    new ArrayBlockingQueue<>(2),                // bounded work queue
    new ThreadPoolExecutor.CallerRunsPolicy()   // rejection policy
);

// When the queue is full and the pool is at its maximum size,
// CallerRunsPolicy runs the rejected task on the submitting thread,
// which slows the producer down (a simple form of backpressure).
for (int i = 0; i < 20; i++) {
    final int id = i;
    pool.submit(() -> {
        System.out.println("Task " + id);
        try { Thread.sleep(100); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    });
}

pool.shutdown();
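
CallerRunsPolicy is only one choice. With the default AbortPolicy, a rejected submission throws RejectedExecutionException, which the caller can catch to shed load, log, or retry. A minimal sketch (the pool parameters and names are illustrative):

Java
ThreadPoolExecutor strictPool = new ThreadPoolExecutor(
    2, 4, 60, TimeUnit.SECONDS,
    new ArrayBlockingQueue<>(2),
    new ThreadPoolExecutor.AbortPolicy());   // reject with an exception instead of running on the caller

try {
    strictPool.submit(() -> System.out.println("work"));
} catch (RejectedExecutionException e) {
    // Queue full and pool at maximum size: decide what to do with the task
    System.err.println("Task rejected: " + e.getMessage());
}
strictPool.shutdown();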

Executors Factory Methods

Factory                     corePoolSize   maximumPoolSize     Queue                           Use case
newFixedThreadPool(n)       n              n                   Unbounded LinkedBlockingQueue   Fixed concurrency, bursty workload
newCachedThreadPool()       0              Integer.MAX_VALUE   SynchronousQueue                Short-lived tasks, unbounded threads
newSingleThreadExecutor()   1              1                   Unbounded LinkedBlockingQueue   Serialized execution
newScheduledThreadPool(n)   n              Integer.MAX_VALUE   DelayedWorkQueue                Delayed and periodic tasks
  • Fixed: Predictable, but unbounded queue can grow until OOM under sustained load. Prefer a bounded queue and rejection policy when you need backpressure.
  • Cached: Creates threads on demand, reclaims idle ones. Can create very many threads under load; use with care.
  • Single: All tasks run sequentially on one thread. Useful for tasks that must not run concurrently (e.g. GUI updates, certain state machines).
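
The table also lists newScheduledThreadPool(n), which returns a ScheduledExecutorService for delayed and periodic work. A minimal sketch (the delays and task bodies are illustrative):

Java
ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);

// Run once after a 1-second delay
scheduler.schedule(() -> System.out.println("delayed task"), 1, TimeUnit.SECONDS);

// Run every 5 seconds, after an initial 1-second delay
scheduler.scheduleAtFixedRate(
        () -> System.out.println("periodic task"), 1, 5, TimeUnit.SECONDS);

// Shut the scheduler down when the periodic work should stop:
// scheduler.shutdown();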

Core Mechanism / Behavior

  • When you call executor.submit(task), ThreadPoolExecutor decides by thread count rather than by looking for an idle worker: below corePoolSize it starts a new worker for the task; otherwise it enqueues the task; if the queue is full it starts extra workers up to maximumPoolSize; beyond that the task is rejected.
  • Workers loop: take a task from the queue, run it, repeat. When the queue is empty, they block until a task arrives (or until shutdown). A hand-rolled sketch of this loop follows the list.
  • Fixed-size pools with unbounded queues never reject tasks by default, but the queue can grow without limit. Use a bounded queue and an explicit rejection policy when you need to limit load.
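
To make the worker loop concrete, here is a deliberately simplified hand-rolled pool. This is not how ThreadPoolExecutor is implemented (it has no shutdown, no maximum size, and no rejection handling), but it shows the same take-and-run cycle; the class and variable names are illustrative:

Java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Toy fixed-size pool: N workers looping over a shared blocking queue.
class ToyThreadPool {
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();

    ToyThreadPool(int nThreads) {
        for (int i = 0; i < nThreads; i++) {
            Thread worker = new Thread(() -> {
                try {
                    while (true) {
                        Runnable task = queue.take();  // block until a task is available
                        task.run();                    // run it, then loop for the next one
                    }
                } catch (InterruptedException e) {
                    // A worker exits its loop when interrupted
                }
            });
            worker.start();
        }
    }

    void submit(Runnable task) {
        queue.add(task);  // enqueue; a blocked worker wakes up and runs it
    }
}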

Key Rules

  • Unbounded queue + fixed pool: Under load the queue can grow without limit and cause OOM. Prefer a bounded queue and a rejection policy when you need backpressure.
  • Using newCachedThreadPool() everywhere: It can create very many threads under load. Use only when you understand the workload and can accept unbounded thread creation.
  • Forgetting to shut down: Pools keep threads alive. Call shutdown() or shutdownNow() when the app or component is done so the JVM can exit cleanly.
  • Size the pool for your workload: for CPU-bound tasks a pool near the number of available cores is a common starting point, while I/O-bound tasks usually need more threads because most of their time is spent waiting (see the sketch after this list). And always shut the pool down when it is no longer needed.
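
As a starting point for sizing (a rule of thumb, not a hard rule), derive the thread count from the number of cores and the ratio of time a task spends waiting versus computing; the ratio below is illustrative:

Java
int cores = Runtime.getRuntime().availableProcessors();

// CPU-bound work: roughly one thread per core
ExecutorService cpuPool = Executors.newFixedThreadPool(cores);

// I/O-bound work: scale up by the wait/compute ratio
double waitToCompute = 9.0;  // e.g. 90 ms waiting on I/O for every 10 ms of CPU work
int ioThreads = (int) (cores * (1 + waitToCompute));
ExecutorService ioPool = Executors.newFixedThreadPool(ioThreads);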

What's Next

For tuning pool size, queue capacity, and rejection policies, see Thread Pools in Java (Core Parameters). For avoiding deadlocks when tasks depend on each other, see Deadlock - How to Detect and Prevent.