Introduction
Performance testing often comes too late in the Software Development Lifecycle, typically after the code is merged and deployed, or only when something starts slowing down in production.
But what if performance testing doesn’t have to wait until the end? What if it could run right inside your Spring Boot CI/CD pipeline, every time the code changes?
That’s the essence of Shift Left Performance Testing: bringing load and latency validation closer to developers. And when you combine Gatling (for load simulation) with mocked dependencies (for stability), you get both speed and consistency in your performance results.
The Problem
One of the biggest challenges with API performance testing is uncontrolled variables:
- External APIs fluctuate in response time
- Databases may have inconsistent caching or data sizes
- Network latency varies per environment
When these factors change, your test results become inconsistent. You’re left wondering: Is my API slow, or was it the dependency this time?
To catch genuine performance regressions, you need stable and repeatable test conditions, which means mocking what’s not under test.
⚙️ Setting the Stage for Controlled Performance Testing
To get predictable results, mock or simplify dependencies before running load tests.
Mock External APIs with WireMock
If your Spring Boot API calls other services, say for authentication, pricing, or inventory, mock those dependencies using WireMock or any other mocking framework.
WireMock example:
import org.junit.jupiter.api.BeforeEach;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.cloud.contract.wiremock.AutoConfigureWireMock;

import static com.github.tomakehurst.wiremock.client.WireMock.*;

@AutoConfigureWireMock(port = 8089)
@SpringBootTest
class ProductServiceIntegrationTest {

    @BeforeEach
    void setupMocks() {
        stubFor(get(urlEqualTo("/inventory/123"))
            .willReturn(aResponse()
                .withFixedDelay(200) // Simulate 200ms latency
                .withHeader("Content-Type", "application/json")
                .withBody("{\"available\": true}")));
    }
}
Now, every test run behaves exactly the same - same delay, same data, same outcome. That’s controlled performance testing.
Note - You can also use WireMock’s recording feature to capture real responses as stub files, so you don’t have to hand-write stubs for large response bodies.
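Here is a minimal sketch of that recording workflow using a standalone WireMockServer; the target URL is a placeholder for your real inventory service and the pause is only illustrative. By default the captured stubs are written under src/test/resources/mappings, ready for playback in later runs.

import com.github.tomakehurst.wiremock.WireMockServer;
import static com.github.tomakehurst.wiremock.core.WireMockConfiguration.options;

public class RecordInventoryStubs {
    public static void main(String[] args) throws Exception {
        // Start WireMock locally; it will proxy calls to the real dependency
        WireMockServer wireMock = new WireMockServer(options().port(8089));
        wireMock.start();

        // Begin recording everything that passes through to the real service
        wireMock.startRecording("https://inventory.example.com"); // placeholder URL

        // ... point the application at http://localhost:8089 and exercise the API ...
        Thread.sleep(60_000); // crude pause while traffic is captured

        // Persist the captured requests/responses as stub mapping files
        wireMock.stopRecording();
        wireMock.stop();
    }
}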
Use H2 for Database Mocking
When your test focuses on the application or API layer, you don’t always need a full production-grade database.
Using an in-memory database like H2 ensures consistency and isolation:
spring:
  datasource:
    url: jdbc:h2:mem:testdb
    driver-class-name: org.h2.Driver
    username: test
    password:
  jpa:
    hibernate:
      ddl-auto: update
You can preload the same dataset before each run for reproducibility. It eliminates variability from query performance, network I/O, and DB load.
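One way to do that is a small seeding component that runs only under a dedicated profile. This is a sketch with assumed profile, table, and column names (perf, product, id/name/available); adapt it to your schema.

import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.jdbc.core.JdbcTemplate;

// Loads the same deterministic dataset into H2 on every startup of the "perf" profile
@Configuration
@Profile("perf")
public class PerfDataLoader {

    @Bean
    CommandLineRunner seedProducts(JdbcTemplate jdbc) {
        return args -> {
            for (int i = 1; i <= 1_000; i++) {
                jdbc.update(
                    "INSERT INTO product (id, name, available) VALUES (?, ?, ?)",
                    i, "product-" + i, true);
            }
        };
    }
}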
Gatling: Performance as Code
Once your environment is stable, define your performance tests in Gatling.
Maven Configuration for Gatling (Java DSL)
pom.xml
<project>
  <dependencies>
    <!-- Gatling bundle: pulls in the core, HTTP, and Java DSL modules -->
    <dependency>
      <groupId>io.gatling.highcharts</groupId>
      <artifactId>gatling-charts-highcharts</artifactId>
      <version>3.11.5</version>
      <scope>test</scope>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <!-- Gatling Maven Plugin -->
      <plugin>
        <groupId>io.gatling</groupId>
        <artifactId>gatling-maven-plugin</artifactId>
        <version>4.9.2</version>
        <executions>
          <execution>
            <phase>verify</phase>
            <goals>
              <goal>test</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>
Performance Test
You can create a separate folder in the project structure to organize all of the performance tests. Treat performance tests just like production code.
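One possible layout is shown below; the simulations package name is only an illustration, and the gatling-maven-plugin picks up Java simulations from the standard test sources.

src/
  test/
    java/
      simulations/
        ProductApiSimulation.java   # the load test shown below
    resources/
      gatling.conf                  # optional Gatling tuning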
import io.gatling.javaapi.core.*;
import io.gatling.javaapi.http.*;

import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;

public class ProductApiSimulation extends Simulation {

    // Define the HTTP protocol
    HttpProtocolBuilder httpProtocol = http
        .baseUrl("http://localhost:8080") // Base URL of your Spring Boot service
        .acceptHeader("application/json");

    // Define the scenario
    ScenarioBuilder scn = scenario("Get Products Scenario")
        .exec(
            http("Get All Products")
                .get("/api/products")
                .check(status().is(200))
        );

    {
        setUp(
            scn.injectOpen(
                rampUsers(50).during(2 * 60),            // ✅ Warm-up over 2 mins
                constantUsersPerSec(20).during(15 * 60)  // ✅ Sustain load for 15 mins
            )
        )
        .protocols(httpProtocol)
        .maxDuration(17 * 60) // ✅ Hard stop for safety
        // ✅ Assertions for automated performance gating
        .assertions(
            global().responseTime().percentile(95).lt(500),   // 95% under 500ms
            global().successfulRequests().percent().gt(98.0)  // Error rate < 2%
        );
    }
}
Code Explanation
- rampUsers(50).during(2 * 60) → Simulates a gradual ramp-up of 50 users over 120 seconds.
- constantUsersPerSec(20).during(15 * 60) → Maintains the load for 15 minutes. This is a long run, so strike a balance between pipeline duration and a load duration long enough to meaningfully exercise the application.
- check(status().is(200)) → Verifies each request returns HTTP 200.
- Assertions → Define performance thresholds. If a threshold is breached:
  - the test fails,
  - the pipeline breaks,
  - developers are notified before the code moves to the next environment.
To avoid a long pipeline run:
✅ Run smoke performance tests (30–60 seconds) on every PR; a minimal sketch follows the GitHub Actions snippet below
✅ Run long tests only on:
- main branch
- nightly builds
- release candidates
GitHub Actions example
on:
  push:
    branches:
      - main
  schedule:
    - cron: "0 2 * * *" # nightly
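For the PR-level smoke run, here is a minimal sketch. It reuses the same scenario shape as ProductApiSimulation; the class name and the light injection profile are assumptions, chosen only so the whole run finishes within about a minute.

import io.gatling.javaapi.core.*;
import io.gatling.javaapi.http.*;

import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;

public class ProductApiSmokeSimulation extends Simulation {

    HttpProtocolBuilder httpProtocol = http
        .baseUrl("http://localhost:8080")
        .acceptHeader("application/json");

    ScenarioBuilder scn = scenario("Products Smoke")
        .exec(
            http("Get All Products")
                .get("/api/products")
                .check(status().is(200))
        );

    {
        setUp(
            scn.injectOpen(
                atOnceUsers(5),                    // tiny initial burst
                constantUsersPerSec(2).during(30)  // ~30 seconds of light load
            )
        )
        .protocols(httpProtocol)
        .maxDuration(60) // hard cap at one minute
        .assertions(
            global().responseTime().percentile(95).lt(500),
            global().successfulRequests().percent().gt(98.0)
        );
    }
}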
Why Assertions Matter
These assertions turn your performance test into a quality gate. If response times exceed 500 ms or error rates go beyond 2%, the test fails automatically.
This instantly alerts developers that a recent change has caused a performance regression — before it ever reaches production.
No manual analysis, no post-deployment surprises.
🚀 Running the Test
Once configured, you can run performance tests using:
mvn gatling:test
This will:
- Run the Gatling simulation (ProductApiSimulation.java) against your Spring Boot API, which must already be running locally or in CI
- Generate an HTML report under:
  target/gatling/productapisimulation-<timestamp>/index.html
- Fail the build automatically if assertions fail (e.g., high latency, low success rate)
🧩 Bringing It into the CI/CD Pipeline
The real power of Shift Left testing comes when performance runs automatically in your pipeline — just like unit or integration tests.
Here’s an example using GitHub Actions:
name: Performance Tests

on:
  push:
    branches:
      - "release/*" # Run only on release branches to avoid long runs on every commit
      - "main"
  workflow_dispatch: # Allow manual trigger

jobs:
  performance-test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up JDK 17
        uses: actions/setup-java@v3
        with:
          java-version: "17"
          distribution: "temurin"

      - name: Build Spring Boot app
        run: ./mvnw clean package -DskipTests
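      # Assumed extra step: the simulation targets http://localhost:8080, so the
      # packaged app (and any WireMock stubs it depends on) must be running before
      # Gatling starts. The jar path and readiness check below are illustrative.
      - name: Start Spring Boot app in background
        run: |
          java -jar target/*.jar &
          timeout 60 bash -c 'until curl -sf http://localhost:8080/api/products; do sleep 2; done'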
      - name: Run Gatling performance tests
        run: ./mvnw gatling:test
This setup:
✅ Runs automatically on main or release branches
✅ Can be triggered manually for controlled environments
✅ Produces Gatling HTML reports, which you can publish as pipeline artifacts with an upload step
✅ Fails the pipeline automatically if performance thresholds are breached
To keep pipelines lean, run these tests only on main, release, or nightly builds — not every feature branch.
Why This Approach Works
When you combine stable mocks, assertions, and CI integration:
- You get consistent metrics across builds
- You isolate application performance from dependency noise
- You catch regressions early with automatic thresholds
- You build confidence without extending pipeline times unnecessarily
This is how teams move from reactive performance firefighting to proactive performance assurance.
🏁 Wrapping Up
Shift-left performance testing isn’t about running massive load tests earlier; it’s about running smarter, smaller, and stable tests continuously.
By combining:
- Spring Boot for your core service
- WireMock for predictable external calls
- H2 for stable DB interactions
- Gatling for performance-as-code
- Assertions to enforce performance budgets
- CI/CD filters to run tests only where needed
You achieve repeatable, reliable, and developer-owned performance validation.
That’s not just testing earlier — it’s building performance culture into the pipeline.
⚡ TL;DR
- Shift left = move performance testing closer to code commits.
- Mock dependencies (WireMock, H2) → get stable, repeatable results.
- Use Gatling → define performance as code.
- Add assertions → fail builds when thresholds break.
- Configure CI/CD → run only on main/release branches.
- Focus on early detection, not end-stage firefighting.
If you have reached here, thank you for reading. Please feel free to leave a comment or share any corrections.
My Other Blogs:
- To Avoid Performance Impact Never Use Spring RestClient Default Implementation in Production
- When Resilience Backfires: Retry and Circuit Breaker in Spring Boot
- Setup GraphQL Mock Server
- Supercharge Your E2E Tests with Playwright and Cucumber Integration
- When Should You Use Server-Side Rendering (SSR)?
- Cracking Software Engineering Interviews
- Test SOAP Web Service using Postman Tool