Java Backend Coding Technology: Writing Code in the Era of AI
Version: 1.1.0 | Repository: github.com/siy/coding-technology
NOTE: This version is substantially updated. Check CHANGELOG.md.
Introduction: Code in a New Era
Software development is changing faster than ever. AI-powered code generation tools have moved from experimental novelty to daily workflow staple in just a few years. We now write code alongside - and increasingly with - intelligent assistants that can generate entire functions, refactor modules, and suggest architectural patterns. This shift creates new challenges that traditional coding practices weren’t designed to handle.
Historically, code has carried a heavy burden of personal style. Every developer brings preferences about naming, structure, error handling, and abstraction. Teams spend countless hours in code review debating subjective choices. Style guides help, but they can’t capture the deeper structural decisions that make code readable or maintainable. When AI generates code, it inherits these same inconsistencies - we just don’t know whose preferences it’s channeling or why it made particular choices.
This creates a context problem. When you read AI-generated code, you’re reverse-engineering decisions made by a model trained on millions of examples with conflicting styles. When AI reads your code to suggest changes, it must infer your intentions from a structure that may not clearly express them. The cognitive overhead compounds: developers burn mental cycles translating between their mental model, the code’s structure, and what the AI “thinks” the code means.
Meanwhile, technical debt accumulates silently. Small deviations from good structure - a validation check here, an exception there, a few mixed abstraction levels - seem harmless in isolation. But they compound. Refactoring becomes risky. Testing becomes difficult. The codebase becomes a collection of special cases rather than a coherent system.
Traditional approaches don’t provide clear, mechanical rules for when to refactor or how to structure new code, so these decisions remain subjective and inconsistent.
This technology proposes a different approach: reduce the space of valid choices until there’s essentially one good way to do most things. Not through rigid frameworks or heavy ceremony, but through a small set of rules that make structure predictable, refactoring mechanical, and business logic clearly separated from technical concerns.
The benefits compound:
Unified structure means humans can read AI-generated code without guessing about hidden assumptions, and AI can read human code without inferring structure from context. A use case looks the same whether you wrote it, your colleague wrote it, or an AI assistant generated it. The structure carries the intent.
Minimal technical debt emerges naturally because refactoring rules are built into the technology. When a function grows beyond one clear responsibility, the rules tell you exactly how to split it. When a component gets reused, there’s one obvious place to move it. Debt doesn’t accumulate because prevention is cheaper than cleanup.
Close business modeling happens when you’re not fighting technical noise. Value objects enforce domain invariants at construction time. Use cases read like business processes because each step does one thing. Errors are domain concepts, not stack traces. Product owners can read the code structure and recognize their requirements.
Requirement discovery becomes systematic. When you structure code as validation → steps → composition, gaps become obvious. Missing validation rules surface when you define value objects. Unclear business logic reveals itself when you can’t name a step clearly. Edge cases emerge when you model errors as explicit types. The structure itself asks the right questions: What can fail here? What invariants must hold? What happens when this is missing? Validating answers for compatibility is mechanical - if a new requirement doesn’t fit the existing step structure, you know immediately whether it’s a new concern or a modification to existing logic.
Asking correct questions becomes easy because the technology provides a framework for inquiry. When discussing requirements with domain experts, you can ask: “What validation rules apply to this field?” (maps to value object factories). “What happens if this step fails?” (maps to error types). “Can these operations run in parallel?” (maps to Fork-Join vs. Sequencer). “Is this value optional or required?” (maps to Option<T> vs. T). The questions are grounded in structure, not abstraction, so answers are concrete and immediately implementable.
Business logic as a readable language happens when patterns become vocabulary. The four return types, parse-don’t-validate, and the fixed pattern catalog form a Business Logic Expression Language - a consistent way to express domain concepts in code. When you use the same patterns everywhere, business logic becomes immediately apparent in all necessary details. The structure itself tells the story: a Sequencer shows process steps, Fork-Join reveals parallel operations, Result<Option<T>> declares “optional but must be valid when present.” Anyone with even a basic grasp of the domain can pick up a new codebase almost instantly. No more narrow specializations where only one developer understands “their” module. A large part of the code becomes universally readable. Onboarding happens in days, not months - developers spend time learning the domain, not deciphering structural choices.
Tooling and automation become dramatically simpler when the structure is predictable. Code generators don’t need to infer patterns - there’s one pattern for validation, one for composition, one for error handling. Static analysis can verify properties mechanically: does this function return exactly one of the four allowed types? Does validation happen before construction? Are errors properly typed? AI assistants can generate more accurate code because the target structure is well-defined and consistent.
Deterministic code generation becomes possible when the mapping from requirements to code is mechanical. Given a use case specification - inputs, outputs, validation rules, steps - there’s essentially one correct structure. Different developers (or AI assistants) should produce nearly identical implementations. This isn’t about stifling creativity; it’s about channeling creativity into business logic rather than structural decisions.
This guide presents the complete technology: the rules, the patterns, the rationale, and the practices. It’s framework-agnostic by design - these principles work whether you’re building REST APIs with Spring, message processors with plain Java, or anything in between. The framework lives at the edges; the business logic remains pure, testable, and independent.
We’ll start with core concepts - the building blocks that make everything else possible. Then we’ll explore the pattern catalog that covers almost every situation you’ll encounter. A detailed use case walkthrough shows how the pieces fit together. Framework integration demonstrates how to bridge this functional core to the imperative world of web frameworks and databases. Finally, we’ll examine common mistakes and how to avoid them.
The goal isn’t to give you more tools. It’s to give you fewer decisions to make, so you can focus on the problems that actually matter.
Why This Technology Works: The Evaluation Framework
Every rule and pattern in this technology is evaluated against five objective criteria. These replace subjective “readability” arguments with measurable comparisons:
1. Mental Overhead - “Don’t forget to...” and “Keep in mind...” items you must track. This appears as things developers must remember because the compiler can’t catch them. Lower is better.
2. Business/Technical Ratio - Balance between domain concepts and framework/infrastructure noise. Higher domain visibility with less technical boilerplate is better.
3. Design Impact - Whether an approach improves design consistency or breaks it. Does it enforce good patterns or allow bad ones?
4. Reliability - Does the compiler catch mistakes, or must you remember? Type safety that makes invalid states unrepresentable eliminates entire classes of bugs.
5. Complexity - Number of elements, connections, and especially hidden coupling. Fewer moving parts and explicit dependencies are better.
These criteria aren’t preferences - they’re measurable attributes. When we say “don’t use business exceptions,” we can prove why:
- Mental Overhead: Checked exceptions force signature pollution; unchecked are invisible (+2 for Result-based)
- Reliability: Exception paths are hidden from type checker; Result makes them explicit (+1 for Result-based)
- Complexity: Exception hierarchies create cross-package coupling (+1 for Result-based)
Similarly, “parse don’t validate”:
- Mental Overhead: No “remember to validate” - invalid states are unrepresentable (+1)
- Reliability: Compiler enforces validity through types, not runtime checks (+1)
- Design Impact: Business invariants encoded in type system, not scattered (+1)
Throughout this guide, major rules reference these criteria. The goal: replace endless “best practices” with five measurable standards.
Core Concepts
Note: This section uses the Pragmatica Lite Core library as the underlying functional-style library. The library is available on Maven Central: https://central.sonatype.com/artifact/org.pragmatica-lite/core
<dependency>
    <groupId>org.pragmatica-lite</groupId>
    <artifactId>core</artifactId>
    <version>0.8.0</version>
</dependency>
The Four Return Kinds
Every function in this technology returns exactly one of four types. Not “usually” or “preferably” - exactly one, always. This isn’t arbitrary restriction; it’s intentional compression of complexity into type signatures.
Why by criteria:
- Mental Overhead: Hidden error channels (exceptions), hidden optionality (null), hidden asynchrony (blocking I/O) all force developers to remember behavior not expressed in signatures. Explicit return types eliminate this (+3).
- Reliability: Compiler verifies error handling, null safety, and async boundaries when encoded in types (+3).
- Complexity: Four types cover all scenarios - no guessing about combinations or special cases (+2).
T - Synchronous, cannot fail, value always present.
Use this when the operation is pure computation with no possibility of failure or missing data. Mathematical calculations, transformations of valid data, simple getters. If you can’t think of a way this function could fail or return nothing, it returns T.
public record FullName(String value) {
public String initials() { // returns String (T)
return value.chars()
.filter(Character::isUpperCase)
.collect(StringBuilder::new, StringBuilder::appendCodePoint, StringBuilder::append)
.toString();
}
}
Option<T> - Synchronous, cannot fail, value may be missing.
Use this when absence is a valid outcome, but failure isn’t possible. Lookups that might not find anything, optional configuration, nullable database columns when null is semantically meaningful (not just “we don’t know”). The key: missing data is normal business behavior, not an error.
// Finding an optional user preference
public interface PreferenceRepository {
Option<Theme> findThemePreference(UserId id); // might not be set
}
Result<T> - Synchronous, can fail, represents business or validation errors.
Use this when an operation might fail for business or validation reasons. Parsing input, enforcing invariants, business rules that can be violated. Failures are represented as typed Cause objects, not exceptions. Every failure path is explicit in the return type.
public record Email(String value) {
private static final Pattern EMAIL_PATTERN = Pattern.compile("^[A-Za-z0-9+_.-]+@[A-Za-z0-9.-]+$");
private static final Fn1<Cause, String> INVALID_EMAIL = Causes.forValue("Invalid email format: {}");
public static Result<Email> email(String raw) {
return Verify.ensure(raw, Verify.Is::notNull)
.map(String::trim)
.flatMap(Verify.ensureFn(INVALID_EMAIL, Verify.Is::matches, EMAIL_PATTERN))
.map(Email::new);
}
}
Promise<T> - Asynchronous, can fail, represents eventual success or failure.
Use this for any I/O operation, external service call, or computation that might block. Promise<T> is semantically equivalent to Result<T> but asynchronous - failures are carried in the Promise itself, not nested inside it. This is Java’s answer to Rust’s Future<Result<T>> without the nesting problem.
public interface AccountRepository {
Promise<Account> findById(AccountId id); // async lookup, can fail
}
Why exactly four?
These four types form a complete basis for composition. You can lift “up” when needed (Option to Result to Promise), but you never nest the same concern twice (Promise<Result<T>> is forbidden). Each type represents one orthogonal concern:
- Synchronous vs. asynchronous (now vs. later)
- Can fail vs. cannot fail (error channel present or absent)
- Value vs. optional value (presence guaranteed or not)
Traditional Java mixes these concerns. A method returning User might throw exceptions (hidden error channel), return null (hidden optionality), or block on I/O (hidden asynchrony). You can’t tell from the signature. With these four types, the signature tells you everything about the function’s behavior before you read a line of implementation.
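To see the contrast in one place, here is a hypothetical service whose four signatures declare the full behavior of each method (all names are illustrative, not from the library):
// Hypothetical service - each signature fully declares behavior:
public interface Accounts {
    AccountSummary summarize(Account account);  // T: pure computation, always succeeds
    Option<Account> findCached(AccountId id);   // Option<T>: may be absent, cannot fail
    Result<AccountId> parseId(String raw);      // Result<T>: sync, can fail validation
    Promise<Account> load(AccountId id);        // Promise<T>: async I/O, can fail
}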
This clarity is what makes AI-assisted development tractable. When generating code, an AI doesn’t need to infer whether error handling is needed - the return type declares it. When reading code, a human doesn’t need to trace execution paths to find hidden failure modes - they’re in the type signature.
Parse, Don’t Validate
Most Java code validates data after construction. You create an object with raw values, then call a validate() method that might throw exceptions or return error lists. This is backwards.
The principle: Make invalid states unrepresentable. If construction succeeds, the object is valid by definition. Validation is parsing - converting untyped or weakly-typed input into strongly typed domain objects that enforce invariants at the type level.
Why by criteria:
- Mental Overhead: No “remember to validate” - type system guarantees validity (+2).
- Reliability: Compiler enforces that invalid objects cannot be constructed (+3).
- Design Impact: Business invariants concentrated in factories, not scattered across codebase (+2).
- Complexity: Single validation point per type eliminates redundant checks (+1).
Traditional validation:
// DON'T: Validation separated from construction
public class Email {
private final String value;
public Email(String value) {
this.value = value; // accepts anything
}
public boolean isValid() { // The caller must remember to check
return value != null && value.matches("^[A-Za-z0-9+_.-]+@[A-Za-z0-9.-]+$");
}
}
// Client code must validate manually:
Email email = new Email(input);
if (!email.isValid()) {
throw new ValidationException("Invalid email");
}
Problems: You can construct invalid Email objects. Validation is a separate step that callers might forget. The isValid() method returns a boolean, discarding information about what’s wrong. You can’t distinguish “null” from “malformed” from “too long” without checking conditions individually.
Parse-don’t-validate approach:
// DO: Validation IS construction
public record Email(String value) {
private static final Pattern EMAIL_PATTERN = Pattern.compile("^[A-Za-z0-9+_.-]+@[A-Za-z0-9.-]+$");
private static final Fn1<Cause, String> INVALID_EMAIL = Causes.forValue("Invalid email format: {}");
public static Result<Email> email(String raw) {
return Verify.ensure(raw, Verify.Is::notNull)
.map(String::trim)
.flatMap(Verify.ensureFn(INVALID_EMAIL, Verify.Is::matches, EMAIL_PATTERN))
.map(Email::new);
}
}
// Client code gets the Result:
Result<Email> result = Email.email(input);
// If this is a Success, the Email is valid. Guaranteed.
The constructor is private (or package-private). The only way to get an Email is through the static factory email(), which returns Result<Email>. If you have an Email instance, it’s valid - no separate check needed. The type system enforces this.
Note: As of current Java versions, records do not support declaring the canonical constructor as private. This limitation means the constructor remains accessible within the same package. Future Java versions may address this. Until then, rely on team discipline and code review to ensure value objects are only constructed through their factory methods. The good news: violations are highly visible in code - since all components are normally constructed via factory methods, any direct new Email(...) call stands out immediately. This makes the issue easy to catch using automated static analysis checks or by instructing AI code review tools to flag direct constructor usage for value objects.
Naming convention: Factories are always named after their type, lowercase-first (camelCase). This creates a natural, readable call site: Email.email(...), Password.password(...), AccountId.accountId(...). It’s slightly redundant but unambiguous and grep-friendly. The intentional redundancy enables conflict-free static imports - import static Email.email allows you to write email(raw) at call sites while preserving context, since the factory name itself indicates what’s being created.
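A quick sketch of the resulting call-site style (Credentials is a hypothetical aggregate record):
import static com.example.domain.Email.email;
import static com.example.domain.Password.password;
// Call sites stay short yet self-describing:
Result<Credentials> credentials = Result.all(email(rawEmail), password(rawPassword))
                                        .map(Credentials::new);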
Optional fields with validation:
What if a field is optional but must be valid when present? For example, a referral code that’s not required but must match a pattern if provided.
Use Result<Option<T>> - validation can fail (Result), and if it succeeds, the value might be absent (Option).
public record ReferralCode(String value) {
private static final String PATTERN = "^[A-Z0-9]{6}$";
public static Result<Option<ReferralCode>> referralCode(String raw) {
return isAbsent(raw)
? Result.success(Option.none())
: validatePresent(raw);
}
private static boolean isAbsent(String raw) {
return raw == null || raw.isEmpty();
}
private static Result<Option<ReferralCode>> validatePresent(String raw) {
return Verify.ensure(raw.trim(), Verify.Is::matches, PATTERN)
.map(ReferralCode::new)
.map(Option::some);
}
}
If raw is null or empty, we succeed with Option.none(). If it’s present, we validate and wrap in Option.some(). If validation fails, the Result itself is a failure. Callers get clear semantics: failure means invalid input, success with none() means no value provided, success with some() means valid value.
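Because all three outcomes travel through the ordinary Result channel, an optional-but-validated field composes with required fields like any other value. A minimal sketch, assuming a hypothetical ValidSignup aggregate:
Result<ValidSignup> validated = Result.all(Email.email(raw.email()),
                                           ReferralCode.referralCode(raw.referral()))
                                      .map(ValidSignup::new);
// ValidSignup holds Option<ReferralCode>; absence is just Option.none().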
Normalization: Factories can normalize input (trim whitespace, lowercase email domains, etc.) as part of parsing. This keeps invariants in one place and ensures all instances are normalized consistently.
Why this matters for AI: When an AI generates a value object, the structure is mechanical: private constructor, static factory named after the type, Result<T> or Result<Option<T>> return type, validation via Verify combinators. No guessing about where validation happens or how errors are reported.
No Business Exceptions
Business failures are not exceptional - they’re expected outcomes of business rules. An invalid email isn’t an exception; it’s a normal case of bad input. An account being locked isn’t an exception; it’s a business state.
The rule: Business logic never throws exceptions for business failures. All failures flow through Result or Promise as typed Cause objects.
Why by criteria:
- Mental Overhead: Checked exceptions pollute signatures (+1 for Result). Unchecked exceptions are invisible - must read implementation (+2 for Result).
- Business/Technical Ratio: Exception stack traces are technical noise; typed Causes are domain concepts (+2 for Result).
- Reliability: Exceptions bypass type checker; Result makes all failures explicit and compiler-verified (+3 for Result).
- Complexity: Exception hierarchies create cross-package coupling (+1 for Result).
Traditional exception-based code:
// DON'T: Exceptions for business logic
public User loginUser(String email, String password) throws
InvalidEmailException,
InvalidPasswordException,
AccountLockedException,
CredentialMismatchException {
if (!isValidEmail(email)) {
throw new InvalidEmailException(email);
}
if (!isValidPassword(password)) {
throw new InvalidPasswordException();
}
User user = userRepo.findByEmail(email)
.orElseThrow(() -> new CredentialMismatchException());
if (user.isLocked()) {
throw new AccountLockedException(user.getId());
}
if (!passwordMatches(user, password)) {
throw new CredentialMismatchException();
}
return user;
}
Problems: Checked exceptions pollute signatures and force callers to handle or rethrow. Unchecked exceptions are invisible in signatures - you can’t tell what might fail without reading implementation. Exception hierarchies create coupling. Stack traces are expensive and often irrelevant for business failures. Testing requires catching exceptions and inspecting types.
Result-based code:
// DO: Failures as typed values
public Result<User> loginUser(String emailRaw, String passwordRaw) {
return Result.all(Email.email(emailRaw),
Password.password(passwordRaw))
.flatMap(this::validateAndCheckStatus);
}
private Result<User> validateAndCheckStatus(Email email, Password password) {
return checkCredentials(email, password)
.flatMap(this::checkAccountStatus);
}
private Result<User> checkCredentials(Email email, Password password) {
return userRepo.findByEmail(email)
.flatMap(user -> validatePassword(user, password));
}
private Result<User> validatePassword(User user, Password password) {
return passwordMatches(user, password)
? Result.success(user)
: LoginError.InvalidCredentials.INSTANCE.result();
}
private Result<User> checkAccountStatus(User user) {
return user.isLocked()
? new LoginError.AccountLocked(user.id()).result()
: Result.success(user);
}
Every failure is a Cause. The LoginError is a sealed interface defining the failure modes:
public sealed interface LoginError extends Cause {
record AccountLocked(UserId userId) implements LoginError {
@Override
public String message() {
return "Account is locked: " + userId;
}
}
enum InvalidCredentials implements LoginError {
INSTANCE;
@Override
public String message() {
return "Invalid email or password";
}
}
}
Failures compose: Result.all(Email.email(...), Password.password(...)) collects validation failures into a CompositeCause automatically. If both email and password are invalid, the caller gets both errors, not just the first one encountered.
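A sketch of what the caller observes when both inputs are bad (Credentials is hypothetical; the exact composite message format is up to the library):
var result = Result.all(Email.email("not-an-email"),
                        Password.password(""))
                   .map(Credentials::new);
// result is a failure; its Cause is a CompositeCause carrying
// both the email error and the password error.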
Adapter exceptions: Foreign code (libraries, frameworks, databases) throws exceptions. Adapter leaves catch these and convert them to Cause objects.
The Pragmatica library provides lift() methods for each monad type to handle exception-to-Cause conversion:
public interface UserRepository {
Promise<Option<User>> findByEmail(Email email);
}
// Implementation (adapter leaf)
class JpaUserRepository implements UserRepository {
public Promise<Option<User>> findByEmail(Email email) {
return Promise.lift(
RepositoryError::fromDatabaseException,
() -> entityManager.createQuery("SELECT u FROM User u WHERE u.email = :email", UserEntity.class)
.setParameter("email", email.value())
.getResultList()
.stream()
.findFirst()
.map(this::toDomain)
.map(Option::option)
.orElseGet(Option::none)
);
}
}
The lift() methods handle try-catch boilerplate and exception-to-Cause conversion automatically or via a provided exception-to-Cause mapping function. Each monad type provides its own lift() method: Option.lift(), Result.lift(), and Promise.lift(). The adapter wraps the PersistenceException in a domain Cause (RepositoryError.DatabaseFailure). Business logic never sees PersistenceException - only domain errors.
Why this matters: Errors are just data. You compose them with map, flatMap, and all() like any other value. Testing is easy - assert on Cause types without catching exceptions. AI can generate error handling mechanically because the pattern is always the same: SomeCause.INSTANCE.result() or SomeCause.INSTANCE.promise().
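A minimal test sketch, using only the Result accessors that appear elsewhere in this guide (isFailure() and cause()); the fixture values are hypothetical:
@Test
void lockedAccountYieldsTypedCause() {
    var result = loginService.loginUser("locked@example.com", "ValidPass1!");
    // No try-catch: the failure is a plain value we can inspect.
    assertTrue(result.isFailure());
    assertInstanceOf(LoginError.AccountLocked.class, result.cause());
}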
Single Pattern Per Function
Every function implements exactly one pattern from a fixed catalog: Leaf, Sequencer, Fork-Join, Condition, or Iteration. (Aspects are the exception - they decorate other patterns.)
Why? Cognitive load. When reading a function, you should recognize its shape immediately. If it’s a Sequencer, you know it chains dependent steps linearly. If it’s Fork-Join, you know it runs independent operations and combines results. Mixing patterns within a function creates mixed abstraction levels and forces readers to hold multiple mental models simultaneously.
This rule has a mechanical benefit: it makes refactoring deterministic. When a function grows beyond one pattern, you extract the second pattern into its own function. There’s no subjective judgment about “is this too complex?” - if you’re doing two patterns, split it.
Why by criteria:
- Mental Overhead: One pattern per function means immediate recognition - no mental model switching (+2).
- Complexity: Mechanical refactoring rule eliminates subjective debates about “too complex” (+2).
- Design Impact: Forces proper abstraction layers - no mixing orchestration with computation (+2).
Single Level of Abstraction
The rule: No complex logic inside lambdas. Lambdas passed to map, flatMap, and similar combinators may contain only:
- Method references (e.g., Email::new, this::processUser)
- Single method calls with parameter forwarding (e.g., param -> someMethod(outerParam, param))
Why? Lambdas are composition points, not implementation locations. When you bury logic inside a lambda, you hide abstraction levels and make the code harder to read, test, and reuse. Extract complex logic to named functions - the name documents intent, the function becomes testable in isolation, and the composition chain stays flat and readable.
Why by criteria:
- Mental Overhead: Flat composition chains scan linearly - no descending into nested logic (+2).
- Business/Technical Ratio: Named functions document intent; anonymous lambdas hide it (+2).
- Complexity: Each function is testable in isolation; buried lambda logic requires testing through its container (+2).
Anti-pattern:
// DON'T: Complex logic inside lambda
return fetchUser(userId)
.flatMap(user -> {
if (user.isActive() && user.hasPermission("admin")) {
return loadAdminDashboard(user)
.map(dashboard -> {
var summary = new Summary(
dashboard.metrics(),
dashboard.alerts().stream()
.filter(Alert::isUrgent)
.toList()
);
return new Response(user, summary);
});
} else {
return AccessError.InsufficientPermissions.INSTANCE.promise();
}
});
This lambda contains: conditional logic, nested map, stream processing, object construction. Mixed abstraction levels. Hard to test. Hard to read.
Correct approach:
// DO: Extract to named functions
return fetchUser(userId)
.flatMap(this::checkAdminAccess)
.flatMap(this::loadAdminDashboard)
.map(this::buildResponse);
private Promise<User> checkAdminAccess(User user) {
return user.isActive() && user.hasPermission("admin")
? Promise.success(user)
: AccessError.InsufficientPermissions.INSTANCE.promise();
}
private Promise<Dashboard> loadAdminDashboard(User user) {
return dashboardService.loadDashboard(user);
}
private Response buildResponse(Dashboard dashboard) {
var urgentAlerts = filterUrgentAlerts(dashboard.alerts());
var summary = new Summary(dashboard.metrics(), urgentAlerts);
return new Response(dashboard.user(), summary);
}
private List<Alert> filterUrgentAlerts(List<Alert> alerts) {
return alerts.stream()
.filter(Alert::isUrgent)
.toList();
}
Now the top-level chain reads linearly: fetch → check access → load dashboard → build response. Each step is named, testable, and at a single abstraction level.
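As an illustration of that testability, the pure alert filter can be exercised directly - a sketch, assuming package-private visibility and a simple Alert fixture (both hypothetical):
@Test
void onlyUrgentAlertsAreKept() {
    var urgent = new Alert("disk full", true);  // isUrgent() == true
    var info = new Alert("login ok", false);
    assertEquals(List.of(urgent), handler.filterUrgentAlerts(List.of(urgent, info)));
}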
Allowed simple lambdas:
Method reference:
// DO: Method reference
.map(Email::new)
.flatMap(this::saveUser)
.map(User::id)
Single method call with parameter forwarding:
// DO: Simple parameter forwarding
.flatMap(user -> checkPermissions(requiredRole, user))
.map(order -> calculateTotal(taxRate, order))
Forbidden in lambdas:
No ternaries (a ternary is the Condition pattern; using one inside a lambda violates Single Pattern per Function):
// DON'T: Ternary in lambda (violates Single Pattern per Function)
.flatMap(user -> user.isPremium()
? applyPremiumDiscount(user)
: applyStandardDiscount(user))
// DO: Extract to a named function
.flatMap(this::applyApplicableDiscount)
private Result<Discount> applyApplicableDiscount(User user) {
return user.isPremium()
? applyPremiumDiscount(user)
: applyStandardDiscount(user);
}
No conditionals whatsoever:
// DON'T: Any conditional logic in lambda
.flatMap(user -> {
if (user.isPremium()) {
return applyPremiumDiscount(user);
} else {
return applyStandardDiscount(user);
}
})
// DO: Extract to a named function
.flatMap(this::applyApplicableDiscount)
Why this matters for AI: Single level of abstraction makes code generation deterministic. When an AI sees a flatMap, it knows to generate either a method reference or a simple parameter-forwarding lambda - nothing else. No decisions about “is this ternary simple enough?” When reading code, the AI can parse the top-level structure without descending into nested lambda logic. Humans benefit identically: scan the chain to understand flow, dive into named functions only when needed.
Example violation:
// DON'T: Mixing Sequencer and Fork-Join
public Result<Report> generateReport(ReportRequest request) {
return ValidRequest.validate(request)
.flatMap(valid -> {
// Sequencer starts here
var userData = fetchUserData(valid.userId());
var salesData = fetchSalesData(valid.dateRange());
// Wait, now we're doing Fork-Join?
return Result.all(userData, salesData)
.flatMap((user, sales) -> computeMetrics(user, sales))
.flatMap(this::formatReport); // Back to Sequencer
});
}
This function starts as a Sequencer (validate → fetch user → fetch sales → compute → format), but fetchUserData and fetchSalesData are independent, so we suddenly do a Fork-Join in the middle. Mixed abstraction levels. Hard to test. Unclear at a glance what the function does.
Corrected:
// DO: One pattern per function
public Result<Report> generateReport(ReportRequest request) {
return ValidRequest.validate(request)
.flatMap(this::fetchReportData)
.flatMap(this::computeMetrics)
.flatMap(this::formatReport);
}
private Result<ReportData> fetchReportData(ValidRequest request) {
// This function is a Fork-Join
return Result.all(fetchUserData(request.userId()),
fetchSalesData(request.dateRange()))
.map(ReportData::new);
}
Now generateReport is a pure Sequencer (validate → fetch → compute → format), and fetchReportData is a pure Fork-Join. Each function has one clear job.
Mechanical refactoring: If you’re writing a Sequencer and realize step 3 needs to do a Fork-Join internally, extract step 3 into its own function that implements Fork-Join. The original Sequencer stays clean.
Monadic Composition Rules
The four return kinds compose via map, flatMap, filter, and aggregation combinators (all, any). Understanding when to lift and how to avoid nesting is essential.
Lifting: You can lift a “lower” type into a “higher” one at call sites:
- T → Option<T> (via Option.option(value))
- T → Result<T> (via Result.success(value))
- T → Promise<T> (via Promise.success(value))
- Option<T> → Result<T> (via option.toResult(cause) or option.await(cause))
- Option<T> → Promise<T> (via option.async(cause) or option.async())
- Result<T> → Promise<T> (via result.async())
You lift when composing functions that return different types:
// Sync validation (Result) lifted into async flow (Promise)
public Promise<Response> execute(Request request) {
return ValidRequest.validate(request)
.async() // Result has dedicated async() method to convert to Promise
.flatMap(step1::apply) // step1 returns Promise
.flatMap(step2::apply); // step2 returns Promise
}
Forbidden nesting: Promise<Result<T>> is not allowed. Promise<T> already carries failures - nesting Result inside creates two error channels and forces callers to unwrap twice. If a function is async and can fail, it returns Promise<T>, period.
Wrong:
// DON'T: Nested error channels
Promise<Result<User>> loadUser(UserId id) { /* ... */ }
// Caller must unwrap twice:
loadUser(id)
.flatMap(resultUser -> resultUser.match(
user -> Promise.success(user),
Cause::promise
)); // Absurd ceremony
Right:
// DO: One error channel
Promise<User> loadUser(UserId id) { /* ... */ }
// Caller just chains:
return loadUser(id).flatMap(nextStep);
Allowed nesting: Result<Option<T>> is permitted sparingly for “optional value that can fail validation.” This represents: “If present, must be valid. If absent, that’s fine.” Example: an optional referral code that must match a pattern when provided.
Result<Option<ReferralCode>> refCode = ReferralCode.referralCode(input);
// Success(None) = not provided, valid
// Success(Some(code)) = provided and valid
// Failure(cause) = provided but invalid
Avoid Option<Result<T>> - it means “maybe there’s a result, and that result might have failed,” which is backwards. Just use Result<Option<T>>.
Aggregation: Use Result.all(...) or Promise.all(...) to combine multiple independent operations:
// Validation: collect multiple field validations
Result<ValidRequest> validated = Result.all(Email.email(raw.email()),
Password.password(raw.password()),
ReferralCode.referralCode(raw.referralCode()))
.map(ValidRequest::new);
// Async: run independent queries in parallel
Promise<Report> report = Promise.all(userRepo.findById(userId),
orderRepo.findByUser(userId),
inventoryService.getAvailableItems())
.flatMap(this::generateReport);
If any input fails, all() fails immediately (fail-fast for Promise) or collects failures (CompositeCause for Result).
Why these rules? They prevent complexity explosion. With exactly four return types and clear composition rules, you can always tell how to combine two functions by looking at their signatures. AI code generation becomes mechanical - given input and output types, there’s one obvious way to compose.
Patterns Reference
Leaf
Definition: A Leaf is the smallest unit of processing - a function that does one thing and has no internal steps. It’s either a business leaf (pure computation) or an adapter leaf (I/O or side effects).
Rationale (by criteria):
- Mental Overhead: Atomic operations have no internal steps to track - immediate comprehension (+2).
- Business/Technical Ratio: Business leaves are pure domain logic; adapter leaves isolate technical concerns (+2).
- Complexity: Single responsibility per leaf - no hidden interactions (+2).
- Reliability: Pure business leaves are deterministic and easily testable (+1).
Business leaves are pure functions that transform data or enforce business rules. Common examples:
// Simple calculation leaf
public static Price calculateDiscount(Price original, Percentage rate) {
return original.multiply(rate);
}
// Domain rule enforcement leaf
public static Result<Unit> checkInventory(Product product, Quantity requested) {
return product.availableQuantity().isGreaterThanOrEqual(requested)
? Result.unitResult()
: InsufficientInventory.cause(product.id(), requested).result();
}
// Data transformation leaf
public static OrderSummary toSummary(Order order) {
return new OrderSummary(
order.id(),
order.totalAmount(),
order.items().size()
);
}
If there’s no I/O and no side effects, it’s a business leaf. Keep each leaf focused on one transformation or one business rule.
Adapter leaves integrate with external systems: databases, HTTP clients, message queues, file systems. They map foreign errors to domain Causes:
public interface UserRepository {
Promise<Option<User>> findByEmail(Email email);
}
// Adapter leaf implementation
class PostgresUserRepository implements UserRepository {
private final DataSource dataSource;
public Promise<Option<User>> findByEmail(Email email) {
return Promise.lift(
e -> RepositoryError.DatabaseFailure.cause(e),
() -> {
try (var conn = dataSource.getConnection();
var stmt = conn.prepareStatement("SELECT * FROM users WHERE email = ?")) {
stmt.setString(1, email.value());
var rs = stmt.executeQuery();
return rs.next() ? mapUser(rs) : null;
}
}
).map(Option::option);
}
private User mapUser(ResultSet rs) throws SQLException {
// Mapping logic; SQLException handled by Promise.lift()
return new User(/* ... */);
}
}
The adapter catches SQLException and wraps it in RepositoryError.DatabaseFailure, a domain Cause. Callers never see SQLException.
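The RepositoryError type itself is not spelled out in this section; a plausible shape, following the sealed-interface style shown under No Business Exceptions, would be:
// Sketch - shape assumed, matching the RepositoryError.DatabaseFailure.cause(e) call site above:
public sealed interface RepositoryError extends Cause {
    record DatabaseFailure(Throwable source) implements RepositoryError {
        public static Cause cause(Throwable source) {
            return new DatabaseFailure(source);
        }
        @Override
        public String message() {
            return "Database operation failed: " + source.getMessage();
        }
    }
}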
Placement: If a leaf is only used by one caller, keep it nearby (same file, same package). If it’s reused, move it immediately to the nearest shared package. Don’t defer - tech debt accumulates when shared code stays in the wrong location.
Anti-patterns:
DON’T mix abstraction levels in a leaf:
// DON'T: This "leaf" is actually doing multiple steps
public static Result<Email> email(String raw) {
var normalized = raw.trim().toLowerCase();
if (!isValid(normalized)) {
logValidationFailure(normalized); // Side effect!
return EmailError.INVALID.result();
}
return Result.success(new Email(normalized));
}
This leaf has a side effect (logging) mixed with validation logic. Extract logging to an Aspect decorator if needed.
DON’T let adapter leaves leak foreign types:
// DON'T: SQLException leaks into business logic
Promise<Option<User>> findByEmail(Email email) throws SQLException {
// Business logic should never see SQLException
}
Wrap all foreign exceptions in domain Causes within the adapter.
Framework independence: Adapter leaves form the bridge between business logic and framework-specific code. This isolation is critical for maintaining framework-agnostic business logic. Strongly prefer adapter leaves for all I/O operations (database access, HTTP calls, file system operations, message queues). This ensures you can swap frameworks (Spring → Micronaut, JDBC → JOOQ) without touching business logic - only rewrite the adapters.
However, dependencies on specific libraries for business functionality (encryption libraries, complex mathematical computations, specialized algorithms) are acceptable within business logic when they’re essential to the domain. The key distinction: I/O adapters isolate infrastructure choices; domain libraries implement business requirements.
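For instance, a password-hashing leaf may legitimately depend on a crypto library, because hashing is a domain requirement rather than an infrastructure choice. A sketch using jBCrypt (the library choice and HashedPassword are illustrative):
import org.mindrot.jbcrypt.BCrypt;
public record HashedPassword(String value) {
    // Business leaf: no I/O, no framework types - the library
    // dependency implements a business requirement.
    public static HashedPassword hashedPassword(Password plain) {
        return new HashedPassword(BCrypt.hashpw(plain.value(), BCrypt.gensalt()));
    }
}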
DO keep leaves focused:
public record Email(String value) {
private static final Pattern EMAIL_PATTERN = Pattern.compile("^[a-z0-9+_.-]+@[a-z0-9.-]+$");
private static final Fn1<Cause, String> INVALID_EMAIL = Causes.forValue("Invalid email");
// DO: One clear responsibility
public static Result<Email> email(String raw) {
return Verify.ensure(raw, Verify.Is::notNull)
.map(String::trim)
.map(String::toLowerCase)
.flatMap(Verify.ensureFn(INVALID_EMAIL, Verify.Is::matches, EMAIL_PATTERN))
.map(Email::new);
}
}
Linear flow, clear responsibility, no side effects, foreign errors properly wrapped.
Sequencer
Definition: A Sequencer chains dependent steps linearly using map and flatMap. Each step’s output feeds the next step’s input. This is the primary pattern for use case implementation.
Rationale (by criteria):
- Mental Overhead: Linear flow, 2-5 steps fits short-term memory capacity - predictable structure (+3).
- Business/Technical Ratio: Steps mirror business process language - reads like requirements (+3).
- Complexity: Fail-fast semantics, each step isolated and testable (+2).
- Design Impact: Forces proper step decomposition, prevents monolithic functions (+2).
The 2-5 rule: A Sequencer should have 2 to 5 steps. Fewer than 2, and it’s probably just a Leaf. More than 5, and it needs decomposition - extract sub-sequencers or group steps.
The rule is intended to limit local complexity. It derives from the average capacity of short-term memory: 7 ± 2 elements.
Domain requirements take precedence: Some functions inherently require more steps because the domain demands it. Value object factories may need multiple validation and normalization steps to ensure invariants - this is correct because the validation logic must be concentrated in one place. Fork-Join patterns may need to aggregate 6+ independent results because that’s what the domain requires. Don’t artificially fit domain logic into numeric rules. The 2-5 guideline helps you recognize when to consider refactoring, but domain semantics always win.
Sync example:
public interface ProcessOrder {
record Request(String orderId, String paymentToken) {}
record Response(OrderConfirmation confirmation) {}
Result<Response> execute(Request request);
interface ValidateInput {
Result<ValidRequest> apply(Request raw);
}
interface ReserveInventory {
Result<Reservation> apply(ValidRequest req);
}
interface ProcessPayment {
Result<Payment> apply(Reservation reservation);
}
interface ConfirmOrder {
Result<Response> apply(Payment payment);
}
static ProcessOrder processOrder(
ValidateInput validate,
ReserveInventory reserve,
ProcessPayment processPayment,
ConfirmOrder confirm
) {
record processOrder(
ValidateInput validate,
ReserveInventory reserve,
ProcessPayment processPayment,
ConfirmOrder confirm
) implements ProcessOrder {
public Result<Response> execute(Request request) {
return validate.apply(request) // Step 1
.flatMap(reserve::apply) // Step 2
.flatMap(processPayment::apply) // Step 3
.flatMap(confirm::apply); // Step 4
}
}
return new processOrder(validate, reserve, processPayment, confirm);
}
}
Four steps, each a single-method interface. The execute() body reads top-to-bottom: validate → reserve → process payment → confirm. Each step returns Result<T>, so we chain with flatMap. If any step fails, the chain short-circuits and returns the failure.
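Wiring happens once, at the composition root. A sketch with hypothetical step implementations:
// Assemble the use case from its step implementations:
ProcessOrder processOrder = ProcessOrder.processOrder(new DefaultValidateInput(),
                                                      new StockReserveInventory(),
                                                      new CardPaymentProcessor(),
                                                      new EmailConfirmOrder());
Result<ProcessOrder.Response> outcome =
        processOrder.execute(new ProcessOrder.Request(rawOrderId, rawPaymentToken));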
Async example (same structure, different types):
public Promise<Response> execute(Request request) {
return ValidateInput.validate(request) // returns Result<ValidInput>
.async() // lift to Promise<ValidInput>
.flatMap(reserve::apply) // returns Promise<Reservation>
.flatMap(processPayment::apply) // returns Promise<Payment>
.flatMap(confirm::apply); // returns Promise<Response>
}
Validation is synchronous (returns Result), so we lift it to Promise using .async(). The rest of the chain is async.
When to extract sub-sequencers:
If a step grows complex internally, extract it to its own interface with a nested structure. Suppose processPayment actually needs to: authorize card → capture funds → record transaction. That’s three dependent steps - a Sequencer. Extract:
// Original step interface
interface ProcessPayment {
Promise<Payment> apply(Reservation reservation);
}
// Implementation delegates to a sub-sequencer
class CreditCardPaymentProcessor implements ProcessPayment {
private final AuthorizeCard authorizeCard;
private final CaptureFunds captureFunds;
private final RecordTransaction recordTransaction;
public Promise<Payment> apply(Reservation reservation) {
return authorizeCard.apply(reservation)
.flatMap(captureFunds::apply)
.flatMap(recordTransaction::apply);
}
}
Now CreditCardPaymentProcessor is itself a Sequencer with three steps. The top-level use case remains a clean 4-step chain.
Anti-patterns:
DON’T nest logic inside flatMap (violates Single Level of Abstraction):
// DON'T: Business logic buried in lambda
return validate.apply(request)
.flatMap(valid -> {
if (valid.isPremiumUser()) {
return applyDiscount(valid)
.flatMap(reserve::apply);
} else {
return reserve.apply(valid);
}
})
.flatMap(processPayment::apply);
The conditional logic is hidden inside the lambda. Extract it:
// DO: Extract to a named function (Single Level of Abstraction)
return validate.apply(request)
.flatMap(this::applyDiscountIfEligible)
.flatMap(reserve::apply)
.flatMap(processPayment::apply);
private Result<ValidRequest> applyDiscountIfEligible(ValidRequest request) {
return request.isPremiumUser()
? applyDiscount(request)
: Result.success(request);
}
DON’T mix Fork-Join inside a Sequencer without extraction:
// DON'T: Suddenly doing Fork-Join mid-sequence (violates Single Pattern + SLA)
return validate.apply(request)
.flatMap(valid -> {
var userPromise = fetchUser(valid.userId());
var productPromise = fetchProduct(valid.productId());
return Promise.all(userPromise, productPromise)
.flatMap((user, product) -> reserve.apply(user, product));
})
.flatMap(processPayment::apply);
Extract the Fork-Join:
// DO: Extract Fork-Join to its own step
return validate.apply(request)
.flatMap(this::fetchUserAndProduct) // Fork-Join inside this step
.flatMap(reserve::apply)
.flatMap(processPayment::apply);
private Promise<ReservationInput> fetchUserAndProduct(ValidRequest request) {
return Promise.all(fetchUser(request.userId()),
fetchProduct(request.productId()))
.map(ReservationInput::new);
}
DO keep the sequence flat and readable:
// DO: Linear, one step per line
return validate.apply(request)
.flatMap(step1::apply)
.flatMap(step2::apply)
.flatMap(step3::apply)
.flatMap(step4::apply);
Fork-Join
Definition: Fork-Join (also known as Fan-Out-Fan-In) executes independent operations concurrently and combines their results. Use it when you have parallel work with no dependencies between branches.
Rationale (by criteria):
- Mental Overhead: Parallel execution explicit in structure - no hidden concurrency (+2).
- Complexity: Independence constraint acts as design validator - forces proper data organization (+3).
- Reliability: Type system prevents dependent operations from being parallelized (+2).
- Design Impact: Reveals coupling issues - dependencies surface as compile errors (+3).
Two flavors:
- Result.all(...) - Synchronous aggregation (not concurrent, just collects multiple Results):
// Validating multiple independent fields
Result<ValidRequest> validated = Result.all(Email.email(raw.email()),
Password.password(raw.password()),
AccountId.accountId(raw.accountId()))
.flatMap((email, password, accountId) ->
ValidRequest.create(email, password, accountId)
);
If all succeed, you get a tuple of values to pass to the combiner. If any fail, you get a CompositeCause containing all failures (not just the first).
- Promise.all(...) - Parallel async execution:
// Running independent I/O operations in parallel
Promise<Dashboard> buildDashboard(UserId userId) {
return Promise.all(userService.fetchProfile(userId),
orderService.fetchRecentOrders(userId),
notificationService.fetchUnread(userId))
.map(this::createDashboard);
}
private Dashboard createDashboard(Profile profile,
List<Order> orders,
List<Notification> notifications) {
return new Dashboard(profile, orders, notifications);
}
All three fetches run concurrently. The Promise completes when all inputs complete successfully or fails immediately if any input fails.
Special Fork-Join cases:
Beyond the standard Result.all() and Promise.all(), there are specialized fork-join methods for specific aggregation needs. The parallel execution pattern remains the same, but the outcome differs:
- Promise.allOf(Collection<Promise<T>>) - Parallel execution with resilient collection of all outcomes:
// Fetching data from a dynamic number of sources, collecting all outcomes
Promise<Report> generateSystemReport(List<ServiceId> services) {
var healthChecks = services.stream()
.map(healthCheckService::check)
.toList();
return Promise.allOf(healthChecks)
.map(this::createReport);
}
private Report createReport(List<Result<HealthStatus>> results) {
var successes = results.stream()
.filter(Result::isSuccess)
.map(Result::value)
.toList();
var failures = results.stream()
.filter(Result::isFailure)
.map(Result::cause)
.toList();
return new Report(successes, failures);
}
Returns Promise<List<Result<T>>> - unlike Promise.all(), which fails fast, allOf() waits for all promises to complete and collects both successes and failures. Use when you need comprehensive results even if some operations fail.