⚡ Complete Tutorial

Apache JMeter Performance Testing Tutorial

Master performance testing from basics to advanced load scenarios. Learn Thread Groups, Samplers, Assertions, Parameterization, Correlation, and how to generate professional HTML reports.

⏱ ~4 hrs 📚 12 Topics 💻 Code Examples 🎯 Quiz Each Topic ☕ Java + JMeter
01
Foundation

Introduction to JMeter

Definition

Apache JMeter is a free, open-source, 100% Java-based desktop application designed to load-test functional behaviour and measure performance. It can simulate a heavy load on a server, group of servers, network, or object to test its strength and analyse overall performance under different load types.

JMeter was originally developed by Stefano Mazzocchi of the Apache Software Foundation in 1998, primarily to test Apache JServ performance. Today it is one of the most widely used performance testing tools in professional QA teams worldwide.

What Can JMeter Test?

Web (HTTP/HTTPS)
REST APIs, web applications, websites — the most common use case.
Database (JDBC)
SQL queries directly against any database with a JDBC driver.
FTP / LDAP / SMTP
File transfers, directory services, and email server performance.
Message Queues (JMS)
ActiveMQ, RabbitMQ and other JMS-compliant brokers.
WebSocket / TCP
Real-time applications and raw TCP protocol testing via plugins.

Types of Performance Testing

Load Testing
Test system behaviour under expected normal and peak load conditions.
Stress Testing
Push the system beyond normal load to find breaking point and failure modes.
Spike Testing
Sudden large increase in load — tests system recovery.
Soak Testing
Run at normal load for extended time to detect memory leaks and degradation.
Scalability Testing
Incrementally increase load to determine at what point performance degrades.
Analogy

JMeter is like a traffic simulation tool for your application. Just as traffic engineers simulate thousands of cars on a road to find bottlenecks, JMeter simulates thousands of virtual users hitting your server simultaneously to find performance bottlenecks before real users experience them.

ℹ️
JMeter is NOT a Browser

JMeter works at the protocol level (HTTP, FTP, JDBC etc.). It does not execute JavaScript, render CSS, or load images the way a browser does. It simulates the network-level requests that a browser would make. For browser-based performance testing, tools like Playwright or Selenium Grid are used alongside JMeter.

🧠Quick Quiz — Topic 1

Which type of performance testing pushes the system beyond its normal operating capacity to find the breaking point?

02
Core Concepts

JMeter Architecture

Understanding JMeter's internal structure helps you design effective test plans. JMeter follows a hierarchical tree structure where every element has a specific role and order of execution matters.

Test Plan Hierarchy

Test Plan
Root element — container for all other elements. Defines global variables and classpath entries.
Thread Group
Simulates virtual users. Controls number of threads, ramp-up, and loop count.
Samplers
Make actual requests to server (HTTP, JDBC, FTP etc.).
Logic Controllers
Control order/flow of samplers (Loop, If, Random, Interleave etc.).
Config Elements
Provide defaults and variables to samplers (HTTP Defaults, CSV Data Set, Cookies etc.).
Pre-Processors
Execute actions before a sampler request (modify parameters, set env variables).
Post-Processors
Execute after sampler response — used for extracting dynamic values (correlation).
Assertions
Validate server response — check status codes, response body content, size etc.
Timers
Introduce delays between requests to simulate real user think time.
Listeners
Collect and display test results (tables, graphs, tree view, HTML reports).

Execution Order per Request

For each sampler, JMeter follows this strict execution order: Config Elements → Pre-Processors → Timers → Sampler → Post-Processors → Assertions → Listeners. Understanding this order is critical for correlation and parameterization.
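
As a sketch of how that order plays out in practice (the sampler and child element names below are hypothetical):

Execution Order — One Sampler and Its Children (illustration)
/* A "Login" HTTP Request with these child elements is executed as:

   1. HTTP Header Manager (Config Element)    → merged into the request
   2. JSR223 Pre-Processor                    → runs before the request is sent
   3. Uniform Random Timer                    → delay applied before sending
   4. Login HTTP Request (Sampler)            → request sent, response received
   5. JSON Path Extractor (Post-Processor)    → dynamic values extracted
   6. Response Assertion                      → response validated
   7. Listeners                               → result recorded                */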

💡
Scope Rule

Elements apply to all samplers in their scope. If a Config Element is placed at Thread Group level, it applies to all samplers in that group. If placed directly under a sampler as a child, it applies only to that sampler. Use this to control scope precisely.
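
A quick sketch of the rule (sampler names are hypothetical):

Scope — Where a Config Element Is Placed (illustration)
/* Test Plan
   └─ Thread Group
      ├─ HTTP Header Manager      ← Thread Group level: applies to Sampler A AND B
      ├─ HTTP Request: Sampler A
      └─ HTTP Request: Sampler B
         └─ HTTP Header Manager   ← child of Sampler B: applies only to Sampler B */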

Standalone vs. Distributed Mode

Standalone
Single machine generates all load. Suitable for tests up to ~500–1000 threads depending on machine specs.
Distributed (Grid)
One Controller machine coordinates multiple Remote Worker (Agent) machines. Used for large-scale load tests with thousands of threads.
🧠Quick Quiz — Topic 2

In JMeter's execution order, which element runs immediately AFTER a sampler sends its request and receives a response?

03
Setup

Installation & Setup

JMeter requires Java to run. It works on Windows, macOS, and Linux. The current stable version is JMeter 5.6.3 (released January 2024), which requires Java 8 or higher.

Step-by-Step Installation (Windows)

  1. Install Java JDK — Download JDK 11 or higher from adoptium.net (Eclipse Temurin). Run the installer and verify with java -version in Command Prompt.

  2. Set JAVA_HOME — Go to System Properties → Environment Variables → New System Variable: Name = JAVA_HOME, Value = JDK install path (e.g., C:\Program Files\Java\jdk-11).

  3. Download JMeter — Visit jmeter.apache.org/download_jmeter.cgi and download the latest binary zip file (e.g., apache-jmeter-5.6.3.zip).

  4. Extract — Unzip to any folder (e.g., C:\apache-jmeter-5.6.3). This is your JMeter home directory.

  5. Launch JMeter GUI — Navigate to bin folder and run jmeter.bat (Windows) or ./jmeter (Mac/Linux). The JMeter GUI will open.

Shell — Verify Installation
# Verify Java
java -version
# Output: openjdk version "11.0.x"...

# Launch JMeter GUI (from bin folder)
# Windows:
jmeter.bat

# Mac / Linux:
./jmeter.sh

# Launch in CLI (Non-GUI) mode
jmeter -n -t testplan.jmx -l results.csv

JMeter Folder Structure

bin/
JMeter executables, jmeter.bat, jmeter.sh, user.properties, jmeter.properties
lib/
Core JAR files. Add 3rd party JARs here (e.g., JDBC drivers)
lib/ext/
JMeter plugins go here. Drop plugin JARs here and restart JMeter
extras/
Sample Ant build files, XSLT stylesheets for report generation
💡
Install Plugins Manager

Download the JMeter Plugins Manager JAR from jmeter-plugins.org and place it in lib/ext/. After restarting JMeter, go to Options → Plugins Manager to install useful plugins like Custom Thread Groups, 3 Basic Graphs, and more without manually managing JARs.

🧠Quick Quiz — Topic 3

Where should you place 3rd party JAR files (like a JDBC driver) in the JMeter directory?

04
Core Element

Test Plan & Thread Group

The Test Plan is the root container for your entire test. The Thread Group is the most important element — it defines how many virtual users run, how fast they start, and how long they run.

Thread Group Configuration

Number of Threads
Number of virtual users. Each thread runs independently and simulates one real user.
Ramp-Up Period (s)
Time JMeter takes to start all threads. 100 users / 100s ramp-up = 1 new user per second.
Loop Count
How many times each thread repeats the test scenario. Set to "Infinite" for duration-based tests.
Duration (s)
Run the test for a fixed time (e.g., 300s = 5 minutes). Requires the Scheduler; typically combined with Loop Count set to Infinite.
Startup Delay (s)
Wait X seconds before starting the thread group. Useful for sequencing multiple thread groups.
Analogy

Think of the Thread Group as a bus company. "Number of Threads" = how many buses. "Ramp-Up" = how long to deploy all buses to route. "Loop Count" = how many trips each bus makes. "Duration" = run for exactly 2 hours regardless of trips completed.

On Sampler Error — Actions

Continue
Default. Ignore error, proceed to next sampler in the script.
Start Next Thread Loop
Skip remaining samplers in current iteration and start from the beginning.
Stop Thread
Stop only the current thread that encountered the error.
Stop Test
Gracefully stop all threads after current samplers finish.
Stop Test Now
Abruptly terminate all threads immediately. Use sparingly.
JMX — Thread Group Config (XML inside .jmx file)
<ThreadGroup guiclass="ThreadGroupGui"
             testname="Homepage Load Test">
  <intProp name="ThreadGroup.num_threads">100</intProp>
  <intProp name="ThreadGroup.ramp_time">60</intProp>
  <boolProp name="ThreadGroup.scheduler">true</boolProp>
  <stringProp name="ThreadGroup.duration">300</stringProp>
  <stringProp name="ThreadGroup.on_sample_error">continue</stringProp>
</ThreadGroup>
<!-- 100 users, 60s ramp-up, run for 300 seconds -->
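
The values above are hard-coded. A common refinement (a sketch, not part of the plan above) is to reference JMeter properties via the __P function so the same .jmx can run at different loads from the CLI — this is what the -J flags shown in Topic 11 override:

Thread Group — Property-Driven Fields (sketch)
/* Enter these expressions in the Thread Group fields instead of literals:

   Number of Threads (users): ${__P(threads,10)}     // -Jthreads=200 overrides; default 10
   Ramp-Up Period (seconds):  ${__P(rampup,10)}
   Duration (seconds):        ${__P(duration,60)}

   Then run: jmeter -n -t testplan.jmx -Jthreads=200 -Jrampup=120 -Jduration=600 */
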
💡
Use Multiple Thread Groups

Model different user types as separate Thread Groups. For example: one Thread Group for Shoppers (50 users browsing) and another for Admins (5 users uploading data). Each group can have different ramp-up, loop count, and samplers.
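
For example (numbers and group names are illustrative):

Thread Groups — Modelling Two User Types (illustration)
/* Thread Group "Shoppers": 50 threads, 60s ramp-up, Loop Count = Infinite, Duration = 600s
     Samplers: Open Homepage → Search → View Product → Add to Cart
   Thread Group "Admins":     5 threads, 10s ramp-up, Loop Count = 20
     Samplers: Login → Upload Catalogue

   Both groups run in parallel by default; tick "Run Thread Groups consecutively"
   on the Test Plan to run them one after another. */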

🧠Quick Quiz — Topic 4

You set 50 threads and a ramp-up of 100 seconds. How many new threads will JMeter start per second?

05
Requests

Samplers & Logic Controllers

Samplers are the elements that actually send requests to the server. They are the heart of your test plan. Logic Controllers determine how and when those samplers are executed.

Most Used Samplers

HTTP Request
Sends HTTP/HTTPS GET, POST, PUT, DELETE requests. Most commonly used sampler for web and API testing.
JDBC Request
Sends SQL queries to a database. Requires JDBC driver JAR and JDBC Connection Configuration element.
FTP Request
Tests FTP server performance for file upload/download operations.
JSR223 Sampler
Write custom sampler logic in Groovy, Java, or JavaScript. Very powerful for complex scenarios (see the Groovy sketch after the HTTP example below).
Debug Sampler
Shows all JMeter variables and properties. Used during test development for troubleshooting.
HTTP Request — POST Login API
/* HTTP Request Sampler Settings:
   Method: POST
   Server: api.example.com
   Path:   /auth/login
   Body:   JSON raw body below */

{
  "username": "${username}",
  "password": "${password}"
}

/* Headers (via HTTP Header Manager):
   Content-Type: application/json
   Accept: application/json       */
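
The JSR223 Sampler listed above has no example of its own, so here is a minimal Groovy sketch (label and variable names are arbitrary) that builds a custom sample result:

JSR223 Sampler — Minimal Groovy Sketch
// 'SampleResult', 'vars' and 'log' are bindings JMeter provides to JSR223 elements
SampleResult.setSampleLabel("Custom Business Step")

def payload = new groovy.json.JsonBuilder([
    orderId: vars.get("orderId") ?: "N/A",   // hypothetical JMeter variable
    items: 3
]).toString()

SampleResult.setSamplerData(payload)            // recorded as the "request" data
SampleResult.setResponseData(payload, "UTF-8")  // recorded as the response body
SampleResult.setResponseCodeOK()                // marks response code 200
SampleResult.setSuccessful(true)
log.info("Custom sampler produced: " + payload)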

Important Logic Controllers

Loop Controller
Repeat child samplers N times. Useful for simulating repeated actions like adding items to cart.
If Controller
Execute child samplers only when a condition evaluates to true. Prefer the __jexl3 or __groovy functions for the condition; the legacy JavaScript-style string (e.g., "${statusCode}"=="200") works but is slower (see the condition sketch after this list).
Transaction Controller
Groups samplers as a single transaction — reports combined response time. Excellent for business flow measurement.
Random Controller
Randomly selects one child sampler per iteration. Good for simulating unpredictable user behaviour.
Interleave Controller
Alternates through child samplers — each iteration picks the next child in sequence.
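
A condition sketch for the If Controller (statusCode, role, and cartCount are assumed to have been set earlier by an extractor or CSV file):

If Controller — Condition Examples (sketch)
/* Tick "Interpret Condition as Variable Expression?" and use a function
   that returns true/false:

   ${__jexl3("${statusCode}" == "200")}
   ${__jexl3(vars.get("role") == "admin")}
   ${__groovy(vars.get("cartCount") != null && vars.get("cartCount").toInteger() > 0)}  */
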
💡
HTTP Request Defaults

Add HTTP Request Defaults (Config Element) at Thread Group level. Set the Server Name/IP and Port once here, and all HTTP Request samplers inherit it. When switching between environments (dev/staging/prod), you only need to change one place.
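
A sketch of the idea (host names are placeholders):

HTTP Request Defaults — One Place for the Environment (sketch)
/* HTTP Request Defaults (Config Element, Thread Group level):
   Protocol: https   Server Name or IP: staging.example.com   Port: 443

   Individual HTTP Request samplers then leave Server/Port empty and set only:
   GET  /api/products
   POST /auth/login

   Switching environments means editing this single element — or use
   ${__P(host,staging.example.com)} here and override with -Jhost=... */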

🧠Quick Quiz — Topic 5

Which Logic Controller groups multiple samplers and reports their COMBINED response time as a single transaction?

06
Results

Listeners & Key Metrics

Listeners collect and display test results in various formats. They can be added at Test Plan level (to capture all samplers) or under a specific sampler (to capture only that sampler's data).

⚠️
Disable Listeners During Load Tests

Listeners consume significant CPU and memory. During actual load tests, disable all GUI listeners and use CLI mode with a results file instead. Enabled listeners during a high-load test will skew your results and slow down JMeter itself.

Common Listeners

View Results Tree
Shows each request/response in detail. Use ONLY during script development — never during load tests.
Aggregate Report
Summary table with Samples, Average, Min, Max, 90th/95th/99th percentiles, Error%, Throughput.
Summary Report
Similar to Aggregate Report but lighter. Good for quick overview.
View Results in Table
Shows each sample as a table row with timestamp, elapsed time, response code.
Simple Data Writer
Writes raw results to a .csv or .jtl file. Use this in CLI mode — lightweight, no GUI overhead.
Backend Listener
Streams results in real time to Grafana/InfluxDB or Graphite for live dashboards.

Aggregate Report — Column Meanings

Column | Meaning | Good Target
#Samples | Total requests sent | —
Average | Mean response time (ms) | < 2000 ms
Median (50th %ile) | 50% of responses faster than this | < 1000 ms
90th %ile | 90% of responses faster than this | < 3000 ms
95th %ile | 95% of responses faster than this | < 5000 ms
99th %ile | 99% of responses faster than this | Monitor carefully
Error % | Percentage of failed requests | < 1%
Throughput | Requests per second handled | Maximize
Received KB/s | Data received per second from server | —
🧠Quick Quiz — Topic 6

Which listener should you use during actual load tests in CLI mode to collect results with minimum overhead?

07
Validation

Assertions

Assertions validate that the server's response meets expectations. Without assertions, JMeter marks any response as "passed" — even a 500 error with an error page. Assertions are critical for meaningful performance test results.

Types of Assertions

Response Assertion
Checks response body, URL, response code, headers for a specific text pattern. Most versatile assertion.
JSON Assertion
Validates JSON response using JSONPath expressions. E.g., verify $.status equals "success".
Duration Assertion
Fails the sample if response time exceeds a threshold in milliseconds. Great for SLA enforcement.
Size Assertion
Validates the size of the response in bytes. Useful to detect truncated responses.
XPath Assertion
Validates XML responses using XPath expressions.
JSR223 Assertion
Write custom assertion logic in Groovy. Use for complex business rule validation.
JSON Assertion — Validate API Response
/* JSON Assertion Settings:
   Assert JSON Path exists: $.data.userId
   Additionally assert value: true
   Expected value: ${expectedUserId}

   Response Assertion Settings:
   Field to test: Response Code
   Pattern matching rules: Equals
   Patterns to test: 200          */

/* JSR223 Assertion (Groovy) — Custom logic */
def response = new groovy.json.JsonSlurper()
                  .parseText(prev.getResponseDataAsString())

if (response.status != "success") {
    AssertionResult.setFailure(true)
    AssertionResult.setFailureMessage(
        "Expected status=success, got: " + response.status
    )
}
💡
Scope Assertions Correctly

Add the assertion as a child of a specific sampler to apply it only to that request. If you add it at Thread Group level, it applies to ALL samplers — which is usually not what you want. Always verify assertion failures in View Results Tree during development.

🧠Quick Quiz — Topic 7

You want to fail a test if response time exceeds 3 seconds. Which assertion should you use?

08
Realism

Timers & Think Time

By default, JMeter sends requests as fast as possible — one after another without any delay. This is unrealistic. Real users pause between actions to read, think, and click. Timers simulate this pause — called Think Time.

⚠️
Always Add Think Time

Without timers, each thread fires its next request the instant the previous response arrives — far more aggressive than any real user, so even a handful of threads can generate an unrealistically high request rate and distort your results. Always add a timer to model realistic user behaviour.

Types of Timers

Constant Timer
Fixed delay (e.g., 1000ms) between every request. Simple, predictable but not realistic.
Uniform Random Timer
Random delay between a min and max range. E.g., 500–1500ms. More realistic than constant.
Gaussian Random Timer
Delay follows a normal (bell curve) distribution. Most realistic for natural user patterns.
Constant Throughput Timer
Controls requests per minute — maintains a target throughput regardless of response time. Useful for hitting specific RPS targets.
Precise Throughput Timer
More accurate version of Constant Throughput Timer. Preferred in JMeter 5.x.
Synchronizing Timer
Holds threads until a specified number accumulate, then releases all at once. Simulates simultaneous user burst.
Timer — Groovy in JSR223 Timer for Variable Think Time
// JSR223 Timer — think time based on scenario
// Simulates realistic: 1–3 seconds think time

import java.util.Random

Random rand = new Random()
long thinkTime = 1000 + rand.nextInt(2001) // 1000–3000ms inclusive

log.info("Think time: " + thinkTime + "ms")
return thinkTime // Timer returns delay in ms

/* Constant Throughput Timer:
   Target throughput: 120 requests/minute (= 2 RPS)
   Calculate based on: All active threads in current group
   JMeter delays threads to hold the overall rate at the
   ~2 RPS target; it throttles down, but cannot push the
   rate higher than the threads and server can deliver   */
🧠Quick Quiz — Topic 8

You want to test your API at exactly 60 requests per minute consistently. Which timer should you use?

09
Test Data

Parameterization & CSV Data

Parameterization replaces hardcoded values in your test script with variables that can take different values for each virtual user or iteration. This is essential for realistic testing — e.g., 100 users each logging in with different credentials.

Methods of Parameterization

User Defined Variables
Config Element at Test Plan level. Define static variables like base URL, port, environment. Referenced as ${baseUrl}.
CSV Data Set Config
Read rows from a CSV file — each thread/iteration gets the next row. Best for login credentials, product IDs, user data.
JMeter Functions
Built-in functions like ${__Random(1,100)}, ${__time()}, ${__UUID()} for dynamic values.
JSR223 Pre-Processor
Groovy script to compute and set variables before a request. Maximum flexibility (see the sketch after the CSV example below).
CSV Data Set Config — Setup & Usage
/* users.csv file (placed in JMeter bin/ or full path):
   username,password,role
   alice,pass123,admin
   bob,secret99,user
   carol,mypass,user
   ...100 more rows... */

/* CSV Data Set Config Settings:
   Filename:      users.csv
   Variable Names: username,password,role
   Delimiter:     , (comma)
   Recycle on EOF: true  (restart from row 1 when file ends)
   Stop Thread on EOF: false
   Sharing mode:  All threads (all threads share the file)  */

/* In HTTP Request Body — reference variables: */
{
  "username": "${username}",
  "password": "${password}"
}

/* In HTTP Header Manager: */
X-User-Role: ${role}

/* JMeter Functions for dynamic data: */
${__Random(1000,9999)}    // random number between 1000-9999
${__UUID()}               // generate unique UUID
${__time(yyyy-MM-dd)}     // current date formatted
${__counter(TRUE,)}       // incrementing counter per thread
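
The JSR223 Pre-Processor method listed above can compute values a CSV cannot supply. A minimal Groovy sketch (the orderRef variable name is an assumption):

JSR223 Pre-Processor — Computed Test Data (Groovy sketch)
// Runs before the sampler; 'vars' and 'log' are standard JMeter bindings
def orderRef = "ORD-" + (vars.get("username") ?: "anon") + "-" + System.currentTimeMillis()
vars.put("orderRef", orderRef)
log.info("Generated orderRef: " + orderRef)

// Reference it in the HTTP Request body or headers as ${orderRef}
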
Example Scenario

You have 100 virtual users each simulating a login. Without parameterization, all 100 users log in with the same username — which is unrealistic and may trigger rate limiting or session conflicts on the server. With a CSV file containing 100 unique credentials, each virtual user logs in with a different account — truly simulating real user behaviour.

💡
Sharing Mode Matters

In CSV Data Set Config, "All threads" sharing mode means all threads share the file pointer — each thread gets a unique row. "Current thread group" means each thread group has its own file pointer. "Current thread" means each thread reads from row 1 independently — use this when each user should complete the full dataset.
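
A quick sizing check (numbers are illustrative) helps choose between these modes:

CSV Sizing — Rows vs Threads × Iterations (worked example)
/* Sharing mode "All threads", Recycle on EOF = false, Stop Thread on EOF = true:
   50 threads × 10 loops = 500 logins → users.csv needs at least 500 unique rows.

   With Recycle on EOF = true the file wraps around and rows are reused once
   exhausted — fine for browsing data, risky for one-time credentials. */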

🧠Quick Quiz — Topic 9

In CSV Data Set Config, what does setting "Recycle on EOF: true" do?

10
Dynamic Values

Correlation & Extractors

Correlation is the process of capturing dynamic values from a server response and using them in subsequent requests. Dynamic values like session tokens, CSRF tokens, auth tokens, and user IDs change with every test run — without correlation, the test will fail because it sends stale values.

Correlation vs Parameterization

Parameterization deals with INPUT data you control (usernames, product IDs from CSV). Correlation deals with DYNAMIC data that the server generates and returns (session IDs, tokens, verification codes). Both are essential for realistic load testing scripts.

Extractor Post-Processors

Regular Expression Extractor
Extract values using Perl5 regex patterns from response body, headers, or URL. Most versatile.
JSON Path Extractor
Extract values from JSON responses using JSONPath (e.g., $.data.token). Best for REST APIs.
XPath Extractor
Extract values from XML/HTML responses using XPath expressions.
Boundary Extractor
Extract text between two boundary strings — simpler than regex when delimiters are consistent.
CSS Selector Extractor
Extract values from HTML using CSS selectors. Good for web page testing.
Correlation — Login → Extract Token → Use in Next Request
/* STEP 1: POST /auth/login
   Server Response (JSON):
   {
     "status": "success",
     "token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...",
     "userId": 1042
   }

   Add JSON Path Extractor as child of Login sampler:
   Variable Name:       authToken
   JSON Path:           $.token
   Default Value:       EXTRACTION_FAILED
   Match No. (0=Random): 0                           */

/* STEP 2: GET /api/orders (next request)
   Add HTTP Header Manager:
   Authorization: Bearer ${authToken}

   JMeter automatically substitutes the extracted
   token from the Login response into this request */

/* Regular Expression Extractor example:
   Variable Name:    csrfToken
   RegEx:            name="_csrf" value="(.+?)"
   Template:         $1$
   Match No.:        1                             */

/* Verify extraction in Debug Sampler — it shows:
   JMeterVariables: authToken = eyJhbGciO...
   This confirms the token was correctly captured  */
💡
Set Default Value for Debugging

Always set a Default Value in extractors (e.g., TOKEN_NOT_FOUND). If extraction fails, JMeter uses this default. When you see TOKEN_NOT_FOUND in subsequent requests, you immediately know correlation is broken — rather than silently sending an empty value which is harder to debug.
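
You can turn that default value into an explicit failure. A JSR223 Assertion sketch (assumes the extractor's Default Value was set to EXTRACTION_FAILED, as in the example above):

JSR223 Assertion — Fail Fast When Correlation Breaks (Groovy sketch)
// 'vars' and 'AssertionResult' are standard bindings in a JSR223 Assertion
def token = vars.get("authToken")

if (token == null || token == "EXTRACTION_FAILED") {
    AssertionResult.setFailure(true)
    AssertionResult.setFailureMessage("authToken was not extracted from the login response")
}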

🧠Quick Quiz — Topic 10

After login, the server returns a JWT token in the response JSON. Which extractor is best suited to capture this token?

11
Execution

CLI Mode & HTML Reports

JMeter's GUI is for building and debugging test scripts only. For actual load tests, always use CLI (Non-GUI) mode. GUI mode consumes extra memory and CPU, which skews results and reduces the load JMeter can generate.

CLI Commands

Shell — JMeter CLI Commands
# Basic CLI run — saves results to CSV
jmeter -n -t testplan.jmx -l results.csv

# Run + generate HTML report in one command
jmeter -n -t testplan.jmx -l results.csv -e -o ./reports/

# CLI flags explained:
# -n   : Non-GUI (headless) mode
# -t   : Path to .jmx test plan file
# -l   : Path to results log file (.csv or .jtl)
# -e   : Generate HTML report after test
# -o   : Output folder for HTML report (must be empty)
# -j   : JMeter log file path

# Override properties referenced in the plan via ${__P(name,default)}
# (e.g., Thread Group fields set to ${__P(threads,10)} etc.)
jmeter -n -t testplan.jmx -l results.csv \
  -Jthreads=200 \
  -Jrampup=120 \
  -Jduration=600

# Generate HTML report from existing CSV file
# (without re-running the test)
jmeter -g results.csv -o ./reports/

# Run test in distributed mode
# (controller sends to remote agents)
jmeter -n -t testplan.jmx -R agent1_ip,agent2_ip -l results.csv

HTML Dashboard Report Contents

APDEX Table
Application Performance Index — rates each transaction as Satisfied / Tolerating / Frustrated based on thresholds.
Statistics Table
Samples, Error%, Average, Min, Max, Median, 90th/95th/99th percentile, Throughput per transaction.
Response Times Over Time
Chart showing how average response time changes throughout the test duration.
Throughput Over Time
Requests per second chart — shows if throughput was stable or fluctuating.
Errors Over Time
When errors occurred — helps correlate errors with load spikes.
Active Threads Over Time
Confirms ramp-up pattern was correct and all threads ran as configured.
ℹ️
APDEX Thresholds

Default APDEX thresholds: Satisfied = response time ≤ 500ms. Tolerating = 500ms–1500ms. Frustrated = > 1500ms. You can customize these in bin/user.properties: set jmeter.reportgenerator.apdex_satisfied_threshold=1000 and jmeter.reportgenerator.apdex_tolerated_threshold=4000 to match your SLA.

user.properties — Customize HTML Report
# bin/user.properties — customize report thresholds

# APDEX thresholds (in milliseconds)
jmeter.reportgenerator.apdex_satisfied_threshold=1000
jmeter.reportgenerator.apdex_tolerated_threshold=3000

# Chart granularity (ms) — 1000 = 1 second ticks
jmeter.reportgenerator.overall_granularity=1000

# Custom report title
jmeter.reportgenerator.report_title=Load Test Report - v2.5.1

# Filter to show only specific transactions
jmeter.reportgenerator.exporter.html.series_filter=\
  ^(Login|SearchProducts|AddToCart|Checkout)(-success|-failure)?$
🧠Quick Quiz — Topic 11

You already ran a test and have results.csv. What command generates the HTML report WITHOUT re-running the test?

12
Professional Tips

Best Practices

Following best practices ensures your JMeter tests are accurate, maintainable, and reflect real-world conditions. These are lessons learned from professional QA teams running tests at scale.

Test Design Best Practices

Always use CLI for load tests
GUI mode adds overhead. Use GUI only to build/debug. Run actual load in -n mode.
Add Think Time always
Use Uniform Random or Gaussian Timer. Without it, results are unrealistically aggressive.
Use Transaction Controllers
Group related samplers (Login flow, Checkout flow) for business-level response time metrics.
Parameterize test data
Never hardcode credentials or IDs. Use CSV Data Set Config for realistic multi-user simulation.
Set Default Values in Extractors
Always set a recognizable default (e.g., TOKEN_MISSING) to quickly detect extraction failures.
Verify with 1–2 threads first
Run script with 1 thread in GUI + View Results Tree before scaling up. Catch functional errors early.

JMeter Performance & Memory

Increase Heap Size
Default heap is 1GB. For large thread counts, edit the HEAP line in jmeter.bat / jmeter.sh (e.g., HEAP="-Xms2g -Xmx4g") or set the HEAP environment variable before launching.
Disable Listeners
Right-click → Disable all listeners during load tests. Only Simple Data Writer (to file) should remain active.
Use Groovy over BeanShell
Groovy (JSR223) is compiled and cached — 5–10x faster than BeanShell for heavy scripting.
Use HTTP Request Defaults
Set server/port/protocol once. Switch environments by changing one config element.
Complete Load Test Checklist
## PRE-TEST CHECKLIST
☐ Script verified with 1 thread — zero errors in Results Tree
☐ Correlation verified — tokens/session IDs extracted correctly
☐ CSV data set has enough rows for thread count × iterations
☐ Timers added — think time between requests
☐ Assertions added — verify response code AND response body
☐ HTTP Request Defaults set — easy environment switching
☐ All GUI Listeners disabled — only file writer active
☐ JMeter heap increased — -Xmx4g or higher
☐ Results file cleared/renamed — fresh file for this run

## RUN COMMAND
jmeter -n \
  -t testplan.jmx \
  -l results_$(date +%Y%m%d_%H%M).csv \
  -e -o ./report_$(date +%Y%m%d_%H%M)/ \
  -j jmeter_$(date +%Y%m%d_%H%M).log

## POST-TEST ANALYSIS
☐ Error% < 1% for all transactions
☐ 90th percentile within SLA thresholds
☐ Throughput stable (not degrading over time)
☐ No memory/CPU issues on server under test
☐ Response time did not increase as test progressed
💡
Integrate with CI/CD

Use the JMeter Maven Plugin or Taurus to run JMeter tests as part of your Jenkins/GitHub Actions pipeline. Set performance thresholds (e.g., fail build if error% > 2% or 90th percentile > 3000ms) to automatically detect performance regressions with every code push.

🧠Quick Quiz — Topic 12

Why is Groovy (JSR223) preferred over BeanShell for custom scripting in JMeter?

🎓 Ready to Test at Scale?

Join STAD Solution's QA course and master JMeter, Selenium, and complete performance testing in a professional environment.

Explore Courses →