Why Zero Reflection¶
Understanding FlagZen's commitment to compile-time code generation and its implications.
The Question¶
Many flag frameworks use reflection at runtime to discover and instantiate variant implementations. Why does FlagZen explicitly avoid this?
The answer is performance, GraalVM compatibility, and debuggability.
What "Zero Reflection" Means¶
FlagZen generates all dispatch code at compile time via the annotation processor. The generated bytecode contains:
- Explicit method calls (not `Method.invoke()`)
- Hardcoded variant-to-factory mappings (not `Class.forName()` lookups)
- Direct polymorphic dispatch (not dynamic proxy creation)
At runtime, the generated proxy is already compiled Java bytecode. No reflection happens.
The Performance Argument¶
Reflection is Measurable Overhead¶
Let's compare two approaches for a simple flag resolution:
Reflection-based (pseudo-code):

```java
String flagValue = provider.getString("checkout-flow").orElse("");
Class<?> variantClass = Class.forName("com.example." + flagValue + "Checkout");
Constructor<?> ctor = variantClass.getConstructor();
CheckoutFlow variant = (CheckoutFlow) ctor.newInstance();
return variant.execute();
```
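Filled out as a runnable sketch (the `CheckoutFlow` interface and `ClassicCheckout` class here are illustrative stand-ins, not FlagZen API):

```java
import java.lang.reflect.Constructor;

// Runnable sketch of the reflection-based dispatch described above.
interface CheckoutFlow {
    String execute();
}

class ClassicCheckout implements CheckoutFlow {
    public ClassicCheckout() {}
    public String execute() { return "classic checkout"; }
}

class ReflectiveDispatch {
    // Resolve a variant class by naming convention and instantiate it reflectively.
    static String dispatch(String flagValue) {
        try {
            Class<?> variantClass = Class.forName(flagValue + "Checkout");
            Constructor<?> ctor = variantClass.getConstructor();
            CheckoutFlow variant = (CheckoutFlow) ctor.newInstance();
            return variant.execute();
        } catch (ReflectiveOperationException e) {
            // Any typo in the flag value surfaces only at runtime.
            return "unmatched variant: " + flagValue;
        }
    }
}
```

Note that a misspelled flag value fails only at runtime, inside the catch block; the compiler cannot check the string-to-class mapping.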
Operations:

- `Class.forName()` — classpath lookup, classloader interaction
- `getConstructor()` — reflective method lookup
- `newInstance()` — reflective constructor invocation
Each operation has overhead: classloader synchronization, metadata parsing, validation, security checks.
Generated-code approach (what FlagZen does):

```java
String flagValue = provider.getString("checkout-flow").orElse("");
Supplier<CheckoutFlow> factory = variantMap.get(flagValue);
if (factory != null) {
    return factory.get().execute();
}
throw new UnmatchedVariantException(...);
```
Operations:

- `HashMap.get()` — constant-time lookup
- Method call — direct invocation on JIT-compiled code
- Optional constructor invocation in the supplier
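The pattern can be sketched end to end as a runnable example. The map below stands in for the variant-to-factory table FlagZen's processor would emit; the `Checkout` types are illustrative, not FlagZen API:

```java
import java.util.Map;
import java.util.function.Supplier;

interface Checkout {
    String run();
}

class ClassicVariant implements Checkout {
    public String run() { return "classic"; }
}

class PremiumVariant implements Checkout {
    public String run() { return "premium"; }
}

class GeneratedStyleDispatch {
    // Hardcoded variant-to-factory mappings: no Class.forName, no Method.invoke.
    static final Map<String, Supplier<Checkout>> VARIANTS = Map.of(
            "classic", ClassicVariant::new,
            "premium", PremiumVariant::new);

    static String dispatch(String flagValue) {
        Supplier<Checkout> factory = VARIANTS.get(flagValue);
        if (factory != null) {
            return factory.get().run(); // direct, JIT-friendly call
        }
        throw new IllegalStateException("Unmatched variant: " + flagValue);
    }
}
```

Because the mappings are ordinary code, an IDE can navigate from the table to each variant, and the compiler verifies that every factory produces a `Checkout`.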
The difference compounds in high-throughput systems. In a backend handling 10,000 requests/second, each with 5 feature flags:
- Reflection approach: 50,000 reflective operations/second. Each `Class.forName()` involves classloader locking and metadata parsing.
- Generated approach: 50,000 map lookups/second. Constant-time, zero locks.
Empirically:

- `Class.forName()` and `getConstructor()` take 100-1000 nanoseconds
- `HashMap.get()` takes 10-50 nanoseconds
- At 50,000 dispatches/second, the difference amounts to roughly 25 ms of CPU per wall-clock second, which compounds to over half an hour of avoidable CPU time per day on each deployed instance
JIT Optimization¶
Modern Java JIT compilers (HotSpot, GraalVM) optimize generated bytecode aggressively. They can:
- Inline method calls
- Eliminate null checks
- Speculate on branch prediction
- Unroll loops
Reflection methods like `invoke()` and `newInstance()` are opaque to the JIT. They appear as black boxes that cannot be inlined or optimized. Generated bytecode, conversely, is transparent and optimizable.
This means:
- Generated dispatch gets faster over time as JIT kicks in
- Reflected dispatch stays slow, bounded by reflection overhead
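The two call shapes the JIT encounters can be contrasted in a minimal sketch (names are illustrative): a direct call whose target the compiler sees and can inline, and a `Method.invoke()` call it treats as opaque.

```java
import java.lang.reflect.Method;

// Both paths return the same value, but only the direct call is
// transparent to the JIT.
class CallShapes {
    public static String greet() { return "ok"; }

    // Direct invocation: the JIT sees the target and can inline it.
    static String direct() {
        return greet();
    }

    // Reflective invocation: Method.invoke is a black box to the JIT.
    static String reflective() {
        try {
            Method m = CallShapes.class.getMethod("greet");
            return (String) m.invoke(null);
        } catch (ReflectiveOperationException e) {
            return "error";
        }
    }
}
```

On HotSpot you can observe inlining decisions for the direct path by running with the diagnostic flags `-XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining`.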
The GraalVM Native Image Argument¶
GraalVM native image compiles Java applications ahead-of-time (AOT) to native binaries. This is critical for serverless, container, and embedded scenarios where startup time and memory matter.
Reflection in Native Image¶
Reflection is fundamentally incompatible with AOT compilation:
- The classloader is gone (AOT builds a closed world)
- `Class.forName()` cannot work — there is no dynamic class loading
- `getConstructor()` cannot introspect; there is no runtime metadata
To use reflection in GraalVM native image, you must declare reflective targets in `reflect-config.json`, a static configuration file:

```json
[
  {
    "name": "com.example.ClassicCheckout",
    "methods": [{ "name": "<init>", "parameterTypes": [] }]
  },
  {
    "name": "com.example.PremiumCheckout",
    "methods": [{ "name": "<init>", "parameterTypes": [] }]
  }
]
```

...and so on for every variant (JSON permits no comments, so the whole list must be spelled out).
This creates a maintenance burden:
- Every new variant requires updating the config
- Config can drift from code
- Large config files slow down compilation
- Hard to automate discovery
Generated Code in Native Image¶
FlagZen's generated bytecode is already compiled Java code. It needs no special configuration:
```java
// Generated code: plain bytecode, no reflection
public class CheckoutFlow_FlagZenProxy implements CheckoutFlow {
    public String execute() {
        String flagValue = provider.getString("checkout-flow").orElse("");
        Supplier<CheckoutFlow> factory = variants.get(flagValue);
        if (factory != null) {
            return factory.get().execute();
        }
        throw new UnmatchedVariantException(...);
    }
}
```
The AOT compiler sees straightforward method calls and constant maps. No configuration needed.
This is why GraalVM native image is a primary use case for FlagZen. Serverless platforms (AWS Lambda, Google Cloud Run) increasingly rely on native image. Reflection-heavy frameworks face significant barriers.
The Debuggability Argument¶
When debugging a flag dispatch issue, you want to see actual code, not a black box reflection invocation.
Reflection-based Debugging¶
In a debugger, stepping into a reflection-based variant resolution shows:
```text
... (application code)
> Class.forName("com.example.PremiumCheckout")
    // Stepping into Method.invoke(...) shows HotSpot internals,
    // not your code
< returns proxy instance
```
You cannot directly see or step into the variant's constructor or methods. Reflection operates at a meta level that is opaque.
Stack traces are confusing:
```text
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:...)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:...)
at java.lang.reflect.Method.invoke(Method.java:...)
at com.example.FlagDispatcher.dispatch(FlagDispatcher.java:42)
```
The actual variant code is buried under reflection machinery.
Generated Code Debugging¶
With generated code, your debugger shows your code:
```text
... (application code)
> CheckoutFlow_FlagZenProxy.execute()
  > variantMap.get(flagValue)
  < Supplier[PremiumCheckout]
  > factory.get()      // Creates the instance
  > variant.execute()  // Calls the real method
```
Stack traces are clean:
```text
at com.example.PremiumCheckout.execute(PremiumCheckout.java:42)
at com.example.CheckoutFlow_FlagZenProxy.execute(CheckoutFlow_FlagZenProxy.java:18)
at com.example.OrderService.processOrder(OrderService.java:27)
```
You see the actual execution path, not reflection internals.
Trade-Offs: What We Give Up¶
Zero-reflection design is not free. FlagZen trades off:
1. Code Size¶
Reflection approach: Minimal runtime code. All variants are discovered dynamically.
Generated approach: More bytecode. Each feature generates a proxy class.
Example overhead: A project with 50 features generates ~50 proxy classes (~500 KB of bytecode). For most applications, this is negligible. For mobile clients or embedded systems, it might matter.
2. Dynamic Discovery¶
Reflection approach: Add a new @Variant class, and it is discovered at runtime automatically.
Generated approach: The annotation processor must run at build time. New variants require recompilation.
This is not a problem in CI/CD (recompilation is standard), but for truly dynamic plugin systems, it is limiting.
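As a sketch of what compile-time registration looks like, consider the following. The `@Variant` annotation shape here is hypothetical, for illustration only; FlagZen's actual annotation API may differ.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical @Variant shape. SOURCE retention underlines the point:
// the annotation exists only for the processor, not at runtime.
@Retention(RetentionPolicy.SOURCE)
@Target(ElementType.TYPE)
@interface Variant {
    String value();
}

interface ShippingQuote {
    String quote();
}

// A new variant: the annotation processor must see this class at build
// time, so adding it means recompiling the module.
@Variant("express")
class ExpressShipping implements ShippingQuote {
    public String quote() { return "express"; }
}
```

With `SOURCE` retention, nothing about the variant registration survives into the class file except the generated dispatch code itself, which is exactly the zero-reflection property.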
3. Cross-Module Variants¶
If feature A is defined in module 1 and variant B is defined in module 2 (compiled separately), reflection-based approaches can discover the variant at runtime. FlagZen's annotation processor runs per-compilation-unit, so cross-module variant discovery is deferred to Release 2 (via startup validation).
When Reflection Makes Sense¶
Not all use cases benefit from zero reflection. FlagZen's choice is the right one for backend systems, services, and GraalVM targets, where:
- Performance matters (every microsecond saved scales across millions of requests)
- Native image is a requirement or aspiration
- Variants are compiled together with features
- Debuggability is valued
Reflection-based flags might be better for:
- Plugins where discovery is dynamic and unavoidable
- Systems where code size is constrained and dynamic discovery saves bytes
- Development tools where compilation is not always available
FlagZen is designed for the former case: production backend systems.
How We Enforce Zero Reflection¶
The FlagZen codebase uses architectural enforcement to prevent accidental reflection:
ArchUnit test (in build):
```java
import com.tngtech.archunit.junit.AnalyzeClasses;
import com.tngtech.archunit.junit.ArchTest;
import com.tngtech.archunit.lang.ArchRule;

import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses;

@AnalyzeClasses(packages = "com.flagzen")
public class ArchitectureTests {

    @ArchTest
    static final ArchRule noReflection =
        noClasses()
            .that().resideInAPackage("com.flagzen..")
            .should().dependOnClassesThat()
            .resideInAPackage("java.lang.reflect..");
}
```
This test fails the build if any flagzen-core code imports or uses `java.lang.reflect.*`.
Generated proxy code is also verified to contain no reflection imports.
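A minimal sketch of such a source-level check follows; the scanning logic is illustrative, not FlagZen's actual verifier.

```java
// Illustrative check: flag generated source that references java.lang.reflect.
class ReflectionImportCheck {
    static boolean usesReflection(String generatedSource) {
        return generatedSource.contains("java.lang.reflect");
    }
}
```

In practice such a check would run over every generated proxy file as part of the processor's own test suite.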
The Cost-Benefit Summary¶
| Aspect | Reflection | Generated (FlagZen) |
|---|---|---|
| Performance | 1000-10,000 ns overhead per dispatch | 10-50 ns overhead |
| GraalVM native image | Requires configuration; brittle | Works out of the box |
| Debuggability | Opaque reflection frames | Clear, inspectable code |
| Code size | Minimal | Modest (one proxy per feature) |
| Dynamic discovery | Yes; variants auto-discovered | Compile-time only |
| Cross-module variants | Possible | Deferred to Release 2 |
For production backend systems, the benefit is worth the cost.
Comparison with Other Frameworks¶
Most feature flag frameworks use reflection:
- LaunchDarkly SDK: Reflection for custom types
- OpenFeature: Reflection for provider registration
- Togglz: Reflection for feature discovery
- Spring Security: Heavy reflection for configuration
FlagZen is unusual in committing fully to compile-time code generation. This design choice is inherited from Apache DeltaSpike and inspired by Quarkus' build-time optimization philosophy.
Further Reading¶
- Architecture Explanation — how code generation works end-to-end
- Design Decisions — rationale for other architectural choices
- GraalVM Docs: Reflection in Native Image