
Looking Back at Java Concurrency

Posted on December 10, 2025

0. Introduction

Embarrassingly, I first encountered the term "concurrency" this year while attending a backend bootcamp. To be fair, during the bootcamp I learned to control concurrency well enough by practicing various locking techniques. But rather than just solving the problems in front of me, I felt that a fundamental understanding of concurrency would help me develop a more professional perspective as a backend developer. So I decided to explore what concurrent programming is and how the ways of handling concurrency in the Java ecosystem have evolved.

1. The Free Lunch Is Over (2004)

In December 2004, Herb Sutter, a developer at Microsoft, published an article titled The Free Lunch Is Over. The article begins with this powerful statement:

The biggest sea change in software development since the OO revolution is knocking at the door, and its name is Concurrency.

Herb Sutter explains that Moore's Law, which observed in 1965 that "the number of transistors integrated into semiconductors doubles every year," was gradually reaching its limits. In August 2001 Intel shipped a chip with a 2GHz clock speed, yet by December 2004, when the article was published, a 4GHz CPU had still not appeared. A 4GHz clock seemed achievable in the near future, but could we really reach 10GHz? Unfortunately, when too many transistors are packed into a limited semiconductor volume, heat generation, power consumption, and current leakage become serious problems, making higher clock speeds increasingly difficult to achieve. That is why even in 2025, the maximum clock speed of high-performance CPUs remains around 6GHz.

Herb Sutter emphasizes that the "free lunch," in which developers could improve performance simply by waiting for faster hardware every year, had already ended a year or two earlier, and that concurrent programming would become increasingly important to meet the growing CPU throughput demands of modern applications. He also notes that programming languages like Java urgently need concurrency programming models at the language level.

2. JSR-133/166 (2004)

In line with this trend, the JCP (Java Community Process) approved JSR (Java Specification Request) 133 and 166 in 2004. Through these two specifications, the Java language firmly established itself as a language capable of concurrent programming. Let’s briefly look at what changes were made.

JSR-133

In fact, the Java language has been able to handle multithreading through the JMM (Java Memory Model) since version 1.0. However, the original model had serious flaws: for example, final fields could appear to change their value, and volatile operations could be reordered with ordinary reads and writes in ways that broke synchronization.

Reordering: compilers, the JIT, caches, and the CPU may execute operations in an order different from the order written in the code, as long as the result looks the same to the single thread running it (the "as-if-serial" illusion). The compiler, runtime, and hardware conspire (cooperate) to maintain that illusion, and the resulting changes in execution order are called 'reordering'.

Therefore, JSR-133 set out to fix the flaws in the JMM and make the volatile, synchronized, and final keywords behave intuitively. The goal was to let developers reason confidently about how multithreaded programs interact with memory, while still allowing correct, high-performance implementations on all the well-known architectures. The main improvements are as follows:

- volatile: a write to a volatile field happens-before every subsequent read of that field, and volatile accesses can no longer be freely reordered with surrounding memory operations
- synchronized: releasing a monitor happens-before every later acquisition of the same monitor, so changes made inside a synchronized block are visible to the next thread that enters it
- final: if an object is constructed safely (its reference does not escape the constructor), every thread sees its final fields fully initialized
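To make the volatile guarantee concrete, here is a minimal sketch (not taken from the spec itself; the class and field names are made up) of the safe-publication pattern that JSR-133 made reliable:

public class SafePublication {
    private int value;                    // ordinary field
    private volatile boolean initialized; // volatile field

    public void writer() {                // runs on thread A
        value = 42;
        initialized = true;               // volatile write: publishes 'value' as well
    }

    public void reader() {                // runs on thread B
        if (initialized) {                // volatile read
            System.out.println(value);    // guaranteed to print 42, never a stale 0
        }
    }
}

Under the pre-JSR-133 model, the reader could observe initialized == true and still see a stale value; the revised happens-before rules rule that out.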

JSR-166

JSR-166, led by spec lead Doug Lea, shifted the concurrency paradigm from 'language support' to 'library support'. While JSR-133 provided a memory model at the low-level language layer, JSR-166 provided high-level abstraction tools in the java.util.concurrent package. The core components are as follows:

- Executor framework: Executor, ExecutorService, and thread pools such as ThreadPoolExecutor, which separate task submission from thread management
- Concurrent collections: ConcurrentHashMap, CopyOnWriteArrayList, and the BlockingQueue implementations
- Synchronizers: CountDownLatch, Semaphore, CyclicBarrier, Exchanger
- Atomic variables: the java.util.concurrent.atomic package (AtomicInteger, AtomicLong, AtomicReference, ...)
- Explicit locks: ReentrantLock and ReadWriteLock in java.util.concurrent.locks

The message from JSR-166 is clear: “Don’t handle threads directly; use high-level abstracted concurrency libraries.”
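As a rough illustration of that message (a sketch with my own task names, not an example from JSR-166 itself), submitting Callable tasks to an ExecutorService replaces creating and joining threads by hand:

import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ExecutorDemo {
    public static void main(String[] args) throws Exception {
        // The pool owns thread creation, reuse, and scheduling.
        ExecutorService pool = Executors.newFixedThreadPool(4);

        List<Callable<Integer>> tasks = List.of(
                () -> 1 + 1,
                () -> 2 * 2,
                () -> 3 * 3);

        // invokeAll blocks until every task is done and returns their Futures.
        for (Future<Integer> result : pool.invokeAll(tasks)) {
            System.out.println(result.get());
        }
        pool.shutdown();
    }
}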

3. Akka / RxJava

Although thread abstraction tools were now provided at the Java language and library level, this approach has a chronic problem: it still revolves around shared mutable state guarded by locks, which makes race conditions and deadlocks easy to introduce and limits scalability under contention in high-concurrency environments. As a result, paradigms based on the principle "if you don't share, you don't need to synchronize" emerged in the JVM ecosystem in the late 2000s.

Akka (2009)

Akka operates on the Actor model. Carl Hewitt published the Actor model at MIT in 1973, and Erlang, implemented in 1986, was built on the same ideas. Jonas Bonér brought this Actor-model philosophy to Scala and the JVM as the Akka framework.

The Actor model does not use locks; instead it enforces encapsulation. An application consists of cooperating entities that react to signals and works by those entities sending signals to one another, much like how communication happens in the real world. The key concepts of Akka are as follows:

- Actor: encapsulates its own state and behavior; the state is never touched from outside the actor
- Message: actors communicate only through immutable, asynchronous messages
- Mailbox: each actor processes the messages in its mailbox one at a time, so its internal state needs no locks
- Supervision: actors form a hierarchy, and a parent decides how to recover when a child fails ("let it crash")
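Below is a minimal sketch using Akka's classic (untyped) Java API; the Counter actor and its string messages are invented for illustration:

import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;

public class Counter extends AbstractActor {
    private int count = 0;  // state is private to the actor; no lock is needed

    @Override
    public Receive createReceive() {
        return receiveBuilder()
                .matchEquals("increment", msg -> count++)
                .matchEquals("report", msg -> System.out.println("count = " + count))
                .build();
    }

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("demo");
        ActorRef counter = system.actorOf(Props.create(Counter.class), "counter");

        // Messages land in the mailbox and are processed one at a time.
        counter.tell("increment", ActorRef.noSender());
        counter.tell("report", ActorRef.noSender());

        system.terminate();
    }
}

The actor mutates count only while handling one message at a time from its mailbox, so no synchronization is ever written by hand.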

RxJava (2013)

In 2009, Microsoft developed Rx.NET, establishing the Reactive Programming pattern. Based on this, Netflix developed RxJava in 2013, porting the ReactiveX pattern to the JVM. RxJava code looks like this:

// getVideos() and getMetadata() stand for asynchronous service calls.
Observable<String> videos = getVideos()
    .flatMap(video -> getMetadata(video))      // fetch metadata for each video concurrently
    .filter(metadata -> metadata.rating > 4.0) // keep only highly rated videos
    .map(metadata -> metadata.title)           // project each result to its title
    .timeout(1, TimeUnit.SECONDS);             // fail the stream if an item takes longer than 1 second

Using RxJava, calls to services can be executed in parallel and the results composed. The core concepts of RxJava can be summarized as follows:

- Observable: a source that pushes a stream of items to its subscribers
- Observer/Subscriber: the consumer that reacts to the emitted items, to errors, and to completion
- Operators: functions such as map, flatMap, and filter that transform and compose streams declaratively
- Scheduler: controls which threads the work and the callbacks run on (subscribeOn/observeOn)

4. Reactive Manifesto (2013)

As the concurrency revolution unfolded, the approaches described above proliferated. Jonas Bonér, the creator of Akka, noticed the commonalities among these different solutions and, together with colleagues, soon organized them into a set of published principles.

Reactive Manifesto

The Reactive Manifesto contains four core principles.

Responsive: the system responds in a timely manner whenever possible; responsiveness is the foundation of usability and trust.

Resilient: the system stays responsive in the face of failure, achieved through replication, containment, isolation, and delegation.

Elastic: the system stays responsive under varying workload, scaling resources up or down as demand changes.

Message Driven: the system relies on asynchronous message passing between components, which provides loose coupling, isolation, and the possibility of back-pressure.

5. Spring WebFlux (2017)

Spring WebFlux was released with Spring Framework 5.0 in 2017. It is a reactive-stack web framework that differs from the existing Servlet-based Spring Web MVC and has the following characteristics:

- Non-blocking I/O: runs on Netty or Servlet 3.1+ containers and serves many concurrent connections with a small number of event-loop threads
- Reactive Streams: built on Project Reactor, exposing asynchronous data as Mono (0..1 items) and Flux (0..N items)
- Back-pressure: consumers can signal how much data they are able to process
- Programming models: both annotation-based controllers and functional endpoints are supported

Spring WebFlux achieves fast responsiveness with non-blocking, asynchronous I/O and models data as asynchronous streams (message driven) using Mono/Flux, so it can be considered a Reactive programming framework. With it, the Reactive way of handling concurrency became established in the Java ecosystem at the framework level.
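As a rough sketch (the endpoints and data are made up), a WebFlux controller returns Mono and Flux instead of plain values, and the framework subscribes to them without blocking a request thread:

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

@RestController
public class VideoController {

    // Mono: an asynchronous sequence of 0..1 items.
    @GetMapping("/video/title")
    public Mono<String> title() {
        return Mono.just("The Free Lunch Is Over");
    }

    // Flux: an asynchronous sequence of 0..N items.
    @GetMapping("/video/titles")
    public Flux<String> titles() {
        return Flux.just("JSR-133", "JSR-166", "Akka", "RxJava");
    }
}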

6. Project Loom (2023)

The problems that Java 21's Project Loom aimed to solve are as follows:

- Platform threads are thin wrappers around OS threads: they are expensive to create, each reserves a large stack, and a JVM can run only a few thousand of them
- The thread-per-request style therefore hits a scalability wall long before the hardware does
- The Reactive/asynchronous alternative scales, but it fragments code into operator chains, breaks stack traces, and is hard to debug and profile

In fact, a Netflix developer reportedly said:

We spent more time debugging reactive chains than we saved in scalability.

Project Loom removes these problems with the following features:

- Virtual threads (JEP 444): lightweight threads created and scheduled by the JVM and mounted on a small pool of carrier OS threads; millions can exist at once, and a blocking call merely parks the virtual thread
- Structured concurrency (preview, JEP 453): treats a group of related tasks as a single unit of work with a shared lifetime and error handling
- Scoped values (preview, JEP 446): a safer, immutable alternative to ThreadLocal for sharing data with child threads

Project Loom moves thread scheduling from the OS kernel into the JVM, so the runtime handles scheduling, stack memory, and observability for virtual threads. It offers an alternative with a far smaller learning curve than the Reactive approach while keeping code debuggable: ordinary stack traces, debuggers, and profilers continue to work.
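A minimal sketch of the thread-per-task style that virtual threads make cheap (the task body and counts are arbitrary), using the executor added in Java 21:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class VirtualThreadDemo {
    public static void main(String[] args) {
        // Each submitted task runs on its own virtual thread; a blocking call
        // only parks the virtual thread, so the carrier OS thread stays free.
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            IntStream.range(0, 10_000).forEach(i ->
                    executor.submit(() -> {
                        Thread.sleep(1000);   // cheap to block on a virtual thread
                        return i;
                    }));
        }  // close() waits for all submitted tasks to complete
    }
}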

7. Conclusion

The Free Lunch Is Over recognized the coming concurrency problem, and JSR-133/166 provided language- and library-level answers. Akka and RxJava explored new ways of handling concurrency on the JVM, and the Reactive Manifesto distilled that experience into a philosophy. Spring WebFlux brought Reactive into the framework, and with Project Loom, Java redesigned its concurrency model at the foundation. Twenty years after The Free Lunch Is Over was published, the "concurrency revolution" still seems to be ongoing. By tracing how the paradigms for handling concurrency have changed, I learned what core values drove each shift, and I gained perspective on how to approach concurrent programming going forward.
