
Project Leyden vs GraalVM Native Image - A Complete Guide



Brief

Java’s startup time and memory footprint have been pain points since the language’s inception. Two major approaches have emerged to solve this: GraalVM Native Image and Project Leyden. Both aim to make Java applications start faster and use less memory, but they take fundamentally different approaches.

This post explains both technologies, their histories, how they work, and most importantly—when you should choose one over the other.

The TL;DR Comparison

Before diving deep, here’s a quick comparison for those who need a decision now:

| Aspect | GraalVM Native Image | Project Leyden |
|---|---|---|
| Approach | Full AOT compilation to native binary | AOT optimization within the JVM |
| Java Compatibility | Closed-world; requires configuration for dynamic features | Full compatibility with all Java features |
| Startup Time | Milliseconds (fastest possible) | 40-60% improvement over baseline JVM |
| Peak Performance | Often lower than JVM (no JIT at runtime) | Matches or exceeds JVM (JIT still available) |
| Memory Footprint | 30-50% less than JVM | Improved, but still runs on JVM |
| Build Complexity | High (resource-intensive, requires metadata) | Low (standard JVM tooling) |
| Maturity | Production-ready since 2019 | JEPs shipping since JDK 24 (2025) |
| Best For | Serverless, CLI tools, containers with strict limits | General Java applications, Spring Boot, microservices |

GraalVM Native Image

The History

GraalVM’s story begins at Sun Microsystems Laboratories (now Oracle Labs) with the Maxine Virtual Machine project. The goal was ambitious: write a Java virtual machine in Java itself to avoid the problems of developing in C++ and benefit from meta-circular optimizations.

The timeline, roughly:

  • 2005: Sun Labs begins work on the Maxine VM, a JVM written in Java
  • Early 2010s: the Graal JIT compiler grows out of that research at Oracle Labs
  • 2018: GraalVM 1.0 is released
  • 2019: GraalVM 19 ships as the first production-ready release

The Native Image technology is built on what Oracle Labs internally called “Substrate VM”—a runtime designed to execute Java code compiled ahead-of-time into native binaries.

How It Works

GraalVM Native Image takes a fundamentally different approach than the traditional JVM. Instead of interpreting bytecode and JIT-compiling hot paths at runtime, it compiles your entire application to a native executable at build time.

The process involves:

  1. Points-to Analysis: The compiler analyzes your code to determine which classes, methods, and fields are reachable from the entry point
  2. Ahead-of-Time Compilation: All reachable code is compiled to native machine code
  3. Static Initialization: Some initializations can be performed at build time and “baked into” the binary
  4. Bundling: The resulting binary includes a minimal runtime (Substrate VM) and your application code

The key concept here is the closed-world assumption: the compiler must know about all classes and methods at build time. This enables aggressive optimizations but creates challenges with Java’s dynamic features.
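The "baked into the binary" static initialization in step 3 can be pictured with a small sketch (class and values here are illustrative, not from any real project):

```java
import java.util.Map;

// Sketch: what build-time initialization means. If this class is marked
// --initialize-at-build-time, the static initializer below runs during the
// native-image build, and DEFAULTS is stored in the binary's pre-initialized
// image heap instead of being recomputed at every process start.
class BuildTimeConfig {
    // Imagine this is expensive: parsing files, building lookup tables, etc.
    static final Map<String, String> DEFAULTS = load();

    static Map<String, String> load() {
        return Map.of("mode", "production", "region", "eu-west-1");
    }
}
```

At runtime the native binary reads the already-materialized map; the `load()` work happened once, on the build machine.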

# Basic native image build
native-image -jar my-app.jar

# With Spring Boot (using the Gradle plugin)
./gradlew nativeCompile
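The Gradle plugin referenced above is typically wired in like this (a minimal sketch; version numbers are illustrative):

```groovy
// build.gradle — minimal Native Build Tools setup
plugins {
    id 'java'
    id 'org.springframework.boot' version '3.3.0'
    id 'org.graalvm.buildtools.native' version '0.10.2'
}
```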

The Closed-World Trade-off

The closed-world assumption is both GraalVM Native Image’s greatest strength and its biggest limitation. Because the compiler knows exactly what code will run, it can:

  • Strip all unreachable code, shrinking the binary
  • Inline and optimize aggressively across the whole program
  • Precompute class initialization at build time

However, this means features that rely on runtime dynamism require special handling:

  • Reflection
  • Dynamic proxies
  • JNI
  • Resource loading
  • Serialization

For Spring Boot applications, frameworks like Spring Native (now integrated into Spring Boot 3+) generate these configurations automatically for most cases. But custom reflection or dynamic class loading still requires manual configuration.
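For those manual cases, reachability metadata is declared in JSON files under `META-INF/native-image/`; a minimal `reflect-config.json` entry looks like this (the class name is hypothetical):

```json
[
  {
    "name": "com.example.MyDynamicallyLoadedClass",
    "allDeclaredConstructors": true,
    "allDeclaredMethods": true
  }
]
```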

Production Realities

GraalVM Native Image is production-ready, with significant adoption from frameworks like Spring Boot, Quarkus, and Micronaut. However, there are practical considerations:

Build Requirements:

  • The `native-image` step is resource-intensive: expect multi-gigabyte memory use and build times measured in minutes
  • Each target OS and CPU architecture needs its own build

Runtime Characteristics:

  • Near-instant startup and a smaller resident footprint
  • No runtime JIT, so peak throughput may trail a warmed-up JVM unless profile-guided optimization is used

Debugging and Observability:

  • Java agents and runtime bytecode instrumentation don’t work; monitoring support must be baked in at build time
  • Debugging uses native tools (e.g. gdb, with debug info generated at build time) rather than the familiar Java stack

Project Leyden

The History

Project Leyden was announced by Mark Reinhold in May 2020 as a response to the growing pressure on Java’s startup time, particularly in cloud-native environments. The project takes its name from the Leyden jar—one of the original devices for storing electrical energy—symbolizing the goal of “storing” computational work for later use.

The project’s stated goal is direct: “improve the startup time, time to peak performance, and footprint of Java programs.”

Unlike GraalVM Native Image, which creates a separate compilation path, Leyden works within the existing JVM infrastructure, building on technologies like Class Data Sharing (CDS) and the existing HotSpot JIT compiler.

The Condenser Model

Project Leyden introduces a concept called condensers—specialized transformers that execute in sequence to optimize application code before or during execution. Think of it as a pipeline of optimization stages:

Source Code → Condenser 1 → Condenser 2 → … → Optimized Runtime

Each condenser can perform transformations that “shift” work from runtime to an earlier phase. The key insight is that many computations performed at startup are deterministic and could be done once and cached.

The CDS and AOT caches are part of a “terminal stage” of this condenser pipeline, generated with standard java commands rather than requiring specialized tooling.
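As a concrete precursor, the plain dynamic-CDS flow (in the JDK since JDK 13) already follows this train-then-reuse shape; assuming an `app.jar`:

```shell
# Training run: on exit, dump the set of loaded classes to an archive
java -XX:ArchiveClassesAtExit=app.jsa -jar app.jar

# Later runs: map the pre-parsed, pre-verified classes from the archive
java -XX:SharedArchiveFile=app.jsa -jar app.jar
```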

JEPs and Current Status

Project Leyden has been delivering features incrementally through JDK Enhancement Proposals (JEPs):

Delivered:

  • JEP 483: Ahead-of-Time Class Loading & Linking (JDK 24)
  • JEP 514: Ahead-of-Time Command-Line Ergonomics (JDK 25)
  • JEP 515: Ahead-of-Time Method Profiling (JDK 25)

In Progress:

  • Ahead-of-time code compilation—caching JIT-compiled machine code in the AOT cache—along with further condenser stages

How It Works

Leyden uses a training run approach. You run your application once in a special mode that records:

  • Which classes are loaded and linked, and how
  • Method execution profiles for the hot paths (JEP 515, JDK 25)

This information is stored in an AOT cache that subsequent runs can use to skip work:

# JDK 24: Two-step process
# Step 1: Training run to record configuration
java -XX:AOTMode=record -XX:AOTConfiguration=app.aotconf -jar my-app.jar

# Step 2: Create the cache from the configuration
java -XX:AOTMode=create -XX:AOTConfiguration=app.aotconf -XX:AOTCache=app.aot

# Production run with the cache
java -XX:AOTCache=app.aot -jar my-app.jar

JDK 25 simplifies this with JEP 514 (AOT Command-Line Ergonomics):

# JDK 25+: One-step cache creation
java -XX:AOTCacheOutput=app.aot -jar my-app.jar

# Production run (same as JDK 24)
java -XX:AOTCache=app.aot -jar my-app.jar

The important distinction from GraalVM: anything not captured in the training run falls back to regular JIT processing. This preserves full Java compatibility—if your application dynamically loads a class that wasn’t seen during training, it still works; it just won’t get the AOT optimization for that class.
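That fallback behavior can be illustrated with a small sketch (the class and the runtime lookup are illustrative):

```java
// Sketch: open-world fallback under Leyden. A class name resolved only at
// runtime still loads fine with an AOT cache active; it simply takes the
// normal class-loading + JIT path instead of the AOT-optimized one.
class DynamicLoadDemo {
    static Object loadByName(String className) {
        try {
            Class<?> cls = Class.forName(className);            // runtime lookup
            return cls.getDeclaredConstructor().newInstance();  // reflective ctor
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        // Under `java -XX:AOTCache=app.aot ...`, this works whether or not
        // the class was observed during the training run.
        Object list = loadByName("java.util.ArrayList");
        System.out.println(list.getClass().getName());
    }
}
```

The same code would fail in a native image unless `java.util.ArrayList` (or whatever is loaded) were declared in reflection metadata at build time.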

Performance Results

JEP 483 alone shows significant improvements: for Spring PetClinic (a representative Spring Boot application loading ~21,000 classes), the AOT cache cuts startup time by roughly 40%. The InfoQ coverage of JDK 24’s release likewise reported 40% faster startup for applications using the new AOT class loading features.

Key Differences Explained

Philosophy: Closed World vs. Open World

This is the fundamental architectural difference.

GraalVM Native Image uses a closed-world assumption. At build time, the compiler determines exactly what code can possibly run. Anything not visible at build time cannot be used at runtime. This enables maximum optimization but requires all dynamic behavior to be declared upfront.

Project Leyden maintains an open-world model. Training runs capture common paths and optimize them, but the full JVM is still available at runtime. Unexpected code paths work—they just don’t get the AOT benefits.

This difference has profound implications:

| Scenario | GraalVM Native Image | Project Leyden |
|---|---|---|
| Undeclared reflection | Fails at runtime | Works (no AOT optimization) |
| Dynamic class loading | Not supported | Works (falls back to JIT) |
| Runtime bytecode generation | Not supported | Works (standard JVM) |
| Changing startup behavior | Requires rebuild | Retrain for optimization |

Startup vs. Peak Performance

GraalVM Native Image wins on startup time. There’s no JVM to boot, no bytecode to interpret, no classes to load. The application is running native code immediately.

However, Project Leyden (and the JVM in general) typically achieves higher peak throughput. The JIT compiler can optimize based on actual runtime behavior, including optimizations that aren’t possible with static analysis:

  • Speculative optimizations that can be undone via deoptimization if an assumption breaks
  • Inlining and devirtualization guided by the receiver types actually observed
  • Continuous recompilation of hot paths as the profile evolves

With Leyden’s ahead-of-time method profiling (JEP 515), the JIT can begin compiling with good profile data immediately, reducing time to peak performance without sacrificing the JIT’s adaptive optimization capabilities.

Build and Deploy Complexity

GraalVM Native Image has higher build complexity:

  • A dedicated, resource-hungry `native-image` compilation step
  • Reachability metadata to maintain for reflection and other dynamic features
  • Separate builds per target OS and architecture

Project Leyden uses standard JVM tooling:

  • The same `javac` and `jar` artifacts you already produce
  • A training run driven purely by `java` command-line flags
  • An AOT cache that is just an extra file next to your unchanged application

Framework Support

Both approaches have strong framework support, but in different ways:

GraalVM Native Image:

  • Quarkus and Micronaut were designed with native compilation in mind
  • Spring Boot 3+ integrates the former Spring Native work and generates reachability metadata automatically

Project Leyden:

  • Works with any framework unchanged, since it is a standard JVM feature
  • Frameworks mainly need to ensure their startup paths are exercised during the training run

When to Choose Each

Choose GraalVM Native Image When:

  1. Startup time is critical and measured in milliseconds

    • Serverless functions (AWS Lambda, Azure Functions, Google Cloud Functions)
    • CLI tools that should feel instant
    • Autoscaling applications that need to respond to load spikes immediately
  2. Memory is strictly constrained

    • Small Kubernetes pod limits (128MB-512MB)
    • Edge computing or embedded systems
    • Cost optimization in cloud environments where memory is billed
  3. You control the entire deployment environment

    • You can test thoroughly before production
    • You can rebuild when dependencies change
    • Your team is comfortable with native image constraints
  4. Your application has limited dynamic behavior

    • Well-defined startup paths
    • Minimal use of reflection (or reflection use is predictable)
    • No runtime bytecode generation

Choose Project Leyden When:

  1. You need full Java compatibility

    • Dynamic class loading is required
    • Heavy use of reflection that’s hard to configure
    • Runtime bytecode generation (code generators, proxies)
  2. Peak throughput matters more than startup

    • Long-running services where startup happens once
    • Batch processing with high throughput requirements
    • Applications that benefit from JIT optimization over time
  3. You want incremental improvement without migration

    • Upgrade JDK and get benefits automatically
    • No code changes required
    • No build process changes for basic functionality
  4. Build simplicity is important

    • Standard Java tooling
    • No resource-intensive native compilation
    • Same binary works across compatible JDK versions

The Middle Ground

For many applications, the decision isn’t binary. Consider a microservices architecture: scale-to-zero functions and CLI utilities compile well with Native Image, while long-running core services run on a Leyden-optimized JVM—each service picks the trade-off that matches its lifecycle.

The Road Ahead

GraalVM Native Image

GraalVM continues to evolve. Recent versions have added:

  • Profile-guided optimization to claw back JIT-like peak performance
  • Additional garbage collectors for native images (including G1 on supported platforms)
  • A growing shared repository of reachability metadata contributed for popular libraries

The trajectory is toward closing the peak performance gap while maintaining startup advantages.

Project Leyden

The project is delivering features incrementally. Beyond the JEPs already delivered, work is underway on caching JIT-compiled code itself, so subsequent runs can start from compiled methods instead of recompiling them, along with further condenser stages.

The vision is a spectrum of optimization levels, from “run normally” to “fully optimized for production deployment,” all using standard JVM tooling.

Conclusion

GraalVM Native Image and Project Leyden represent two valid approaches to the same problem. They’re not mutually exclusive—GraalVM Native Image will likely benefit from Leyden’s work on the JDK, and both projects push Java’s performance boundaries.

GraalVM Native Image is the right choice when you need the absolute fastest startup and smallest footprint, and you’re willing to accept the closed-world constraints and build complexity.

Project Leyden is the right choice when you want significant performance improvements while maintaining full Java compatibility and standard tooling.

For most enterprise Java applications—particularly Spring Boot services—Project Leyden offers a compelling path: upgrade your JDK, run a training phase, and get meaningful startup improvements without changing your code or build process.

The key takeaway: both technologies are production-viable, and understanding their trade-offs lets you make the right choice for your specific use case.



