Topic 1 of 12
☕ Java Testing Framework

Selenium
TestNG Tutorial

Master TestNG - the most popular Java testing framework used with Selenium. Learn every annotation, assertion, DataProvider, parallel execution, and listener concept with real-world examples.

⏱️ ~4 hrs ☕ Java 🎯 12 Topics 💡 Real-world examples 🧪 Quiz each section
01
Introduction
What is TestNG?
TestNG ("Test Next Generation") is an open-source Java testing framework inspired by JUnit and NUnit. Cédric Beust created it in 2004 to overcome JUnit's limitations. It supports unit, functional, integration, and end-to-end testing, and is released under the Apache License 2.0.
🧠 Real-World Analogy: Imagine you are a movie director. JUnit is like a simple script — it can tell actors what to do, but there's no way to say "Actor A should enter only after Actor B has finished". TestNG is like a full production plan — it controls who acts, when they act, in what order, with what data, and how many times. That's exactly what TestNG does for your test methods.

Why was TestNG created? — JUnit's limitations:

Restrictive method naming
In JUnit 3, test methods had to start with the word "test" (e.g., testLogin). TestNG removes this restriction: any method name works once you add the @Test annotation.
No parallel testing
JUnit (early versions) had no built-in parallel test execution. TestNG supports running tests across multiple threads and browsers simultaneously — essential for Selenium cross-browser testing.
No grouping
JUnit couldn't group tests as "smoke" or "regression". TestNG's groups feature lets you tag tests and run specific groups (e.g., run only smoke tests before deployment).
No data-driven testing
JUnit had no built-in mechanism to run the same test with multiple data sets. TestNG's @DataProvider solves this elegantly.
No dependency management
JUnit couldn't say "run test B only if test A passed". TestNG's dependsOnMethods attribute enables smart test ordering.

Key advantages of TestNG:

  • Flexible annotations — Rich set of @Before/@After annotations at Suite, Test, Class, and Method levels giving precise control over setup and teardown logic.
  • Built-in reporting — TestNG generates HTML + XML reports automatically. No extra setup needed. Shows passed, failed, and skipped test counts with execution time.
  • Data-driven testing — @DataProvider lets you run the same test method with multiple input sets (e.g., test login with 10 different username/password combinations).
  • Parallel execution — Run tests simultaneously across multiple browsers, devices, or threads. Massively reduces execution time for large test suites.
  • Re-run failed tests — After a test run, TestNG generates a testng-failed.xml file automatically. You can re-run only the failed tests without changing any code.
  • Seamless tool integration — Works with Maven, Gradle, Eclipse, IntelliJ IDEA, Jenkins, and all major CI/CD tools out of the box.
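The testng-failed.xml re-run can also be wired into Maven. A minimal sketch, assuming the default test-output folder: point the Surefire plugin at the generated file and run mvn test again.

```xml
<!-- pom.xml (sketch) — re-run only the tests that failed in the previous run -->
<configuration>
    <suiteXmlFiles>
        <!-- testng-failed.xml is generated automatically after a run with failures -->
        <suiteXmlFile>test-output/testng-failed.xml</suiteXmlFile>
    </suiteXmlFiles>
</configuration>
```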
🧪 Quiz: What does "NG" stand for in TestNG?
02
Setup
Setup & Installation
TestNG can be added to a Java project via Maven, Gradle, or manual JAR download. In professional automation projects, Maven is the standard approach. You add a dependency in pom.xml and Maven downloads TestNG automatically.

Method 1 — Maven (Recommended for professional projects):

pom.xml — Add TestNG dependency
<dependencies>
    <!-- TestNG dependency — always use latest stable version -->
    <dependency>
        <groupId>org.testng</groupId>
        <artifactId>testng</artifactId>
        <version>7.9.0</version>    <!-- latest stable as of 2024 -->
        <scope>test</scope>         <!-- scope=test: only needed for testing -->
    </dependency>
</dependencies>

<!-- Configure Maven Surefire Plugin to use testng.xml -->
<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-surefire-plugin</artifactId>
            <version>3.2.5</version>
            <configuration>
                <suiteXmlFiles>
                    <suiteXmlFile>testng.xml</suiteXmlFile>
                </suiteXmlFiles>
            </configuration>
        </plugin>
    </plugins>
</build>

Method 2 — Install TestNG in Eclipse IDE:

  • 1
    Eclipse: Help → Eclipse Marketplace → search for "TestNG" → click "Install TestNG for Eclipse" → restart Eclipse. This adds the TestNG plugin so you can right-click and run TestNG tests directly from the IDE.
  • 2
    IntelliJ IDEA: TestNG support is bundled by default. Just add the Maven dependency and you're ready to go; no additional plugin needed.

Your first TestNG test class — structure:

FirstTestNGTest.java
import org.testng.Assert;
import org.testng.annotations.Test;

public class FirstTestNGTest {

    @Test
    public void verifyGoogleTitle() {
        // Real-world example: verify a string equals expected
        String actualTitle = "Google";
        Assert.assertEquals(actualTitle, "Google");
        System.out.println("Test passed! Title is: " + actualTitle);
    }

    @Test
    public void verifyMathOperation() {
        int result = 10 + 5;
        Assert.assertEquals(result, 15, "Math result should be 15");
    }
}
💡
How to run: In Eclipse — Right-click the class → Run As → TestNG Test. TestNG generates a report in the test-output folder. Open test-output/index.html in a browser to see the full HTML report with pass/fail/skip counts.
🧪 Quiz: In pom.xml, why is the TestNG dependency's scope set to "test"?
03
Core Concept
TestNG Annotations — Complete Reference
Annotations are special markers prefixed with @ that tell TestNG when and how to execute a method. Without annotations, TestNG doesn't know which methods are tests, which are setup methods, and which are teardown methods. TestNG looks for annotations — not method names.

All TestNG annotations with real-world purpose:

@Test
Marks a method as a test case. The core annotation. Without this, TestNG ignores the method. Every test you want to run must have @Test.
@BeforeSuite
Runs once before the entire test suite starts. Real-world use: Open a database connection, start a mock server, initialize global config that ALL tests will share.
@AfterSuite
Runs once after the entire test suite finishes. Real-world use: Close the database connection, shut down the mock server, send an email report.
@BeforeTest
Runs before any test methods in a <test> tag in testng.xml. Real-world use: Launch the browser once for all tests in that XML <test> block.
@AfterTest
Runs after all test methods in a <test> tag finish. Real-world use: Close the browser after all tests in that block.
@BeforeClass
Runs once before the first @Test in the class. Real-world use: Initialize WebDriver, set base URL, configure global timeouts — once per class.
@AfterClass
Runs once after all @Test methods in the class finish. Real-world use: Quit WebDriver, close the browser window, clear test data created by this class.
@BeforeMethod
Runs before every single @Test method. Real-world use: Navigate to the login page, log in, set up a fresh state before EACH test — so tests don't affect each other.
@AfterMethod
Runs after every single @Test method. Real-world use: Take screenshot on failure, clear cookies, reset application state after EACH test.
@BeforeGroups
Runs before the first test in a specific group. Real-world use: Set up test data specifically needed by the "payment" group of tests.
@AfterGroups
Runs after the last test in a specific group finishes. Real-world use: Clean up test data created by the "payment" group.
@DataProvider
Marks a method that supplies test data to @Test methods. Returns Object[][]. Real-world use: Provide multiple username/password combos to a login test.
@Parameters
Passes parameters from testng.xml to test methods. Real-world use: Pass browser name (Chrome/Firefox) from XML to run the same tests on different browsers.
@Listeners
Attaches a Listener class to the test. Real-world use: Automatically take a screenshot whenever a test fails.
@Factory
Creates instances of test classes dynamically. Used for advanced scenarios where tests need to run with different configurations of the class itself.

Complete working example with all key annotations:

LoginTest.java — Real Selenium + TestNG example
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.Assert;
import org.testng.annotations.*;

public class LoginTest {

    private WebDriver driver;

    @BeforeClass
    public void setUp() {
        // Runs ONCE before all tests in this class
        // Real-world: Initialize WebDriver
        driver = new ChromeDriver();
        driver.manage().window().maximize();
        System.out.println("Browser launched");
    }

    @BeforeMethod
    public void navigateToLoginPage() {
        // Runs BEFORE every @Test — fresh start for each test
        driver.get("https://example.com/login");
        System.out.println("Navigated to login page");
    }

    @Test
    public void validLogin() {
        // Test Case 1: Valid login
        System.out.println("Running: validLogin");
        // ... selenium steps here
        Assert.assertTrue(true, "Login should succeed");
    }

    @Test
    public void invalidPasswordLogin() {
        // Test Case 2: Wrong password
        System.out.println("Running: invalidPasswordLogin");
        Assert.assertTrue(true, "Should show error message");
    }

    @AfterMethod
    public void clearCookies() {
        // Runs AFTER every @Test — clean state for next test
        driver.manage().deleteAllCookies();
        System.out.println("Cookies cleared");
    }

    @AfterClass
    public void tearDown() {
        // Runs ONCE after all tests finish — close browser
        if (driver != null) {
            driver.quit();
            System.out.println("Browser closed");
        }
    }
}
⚠️
Common mistake: Using @BeforeTest when you mean @BeforeMethod. They are different! @BeforeTest runs before the entire <test> XML block (not before each method). If you want to run setup before EACH test case, always use @BeforeMethod. This is a very common interview question.
🧪 Quiz: You have 5 test methods in a class. You want to launch the browser ONCE before all 5 tests run, and close it ONCE after all 5 finish. Which annotations do you use?
04
Execution Flow
Annotation Execution Order & Hierarchy
TestNG annotations execute in a fixed, predefined order. Understanding this order is critical — it prevents bugs caused by running teardown before setup, or setup that runs too early or too late. This is one of the most common TestNG interview topics.

Complete execution order (top = first, bottom = last):

@BeforeSuite
Runs ONCE at very beginning of entire suite
@BeforeTest
Runs before each <test> tag in testng.xml
@BeforeClass
Runs once before first @Test in the class
@BeforeMethod
Runs before every single @Test method
@Test
The actual test method runs here ⭐
@AfterMethod
Runs after every single @Test method
@AfterClass
Runs once after last @Test in the class
@AfterTest
Runs after all tests in <test> tag finish
@AfterSuite
Runs ONCE at the very end of entire suite

Real-world execution trace — what you'd see in console output:

Console output when 2 @Test methods run
// Class has: setUp(@BeforeClass), navigateTo(@BeforeMethod),
// testLogin + testSearch (@Test x2), clearCookies(@AfterMethod), tearDown(@AfterClass)

setUp()         // @BeforeClass — runs ONCE
navigateTo()    // @BeforeMethod — runs before testLogin
testLogin()     // @Test #1
clearCookies()  // @AfterMethod — runs after testLogin
navigateTo()    // @BeforeMethod — runs before testSearch
testSearch()    // @Test #2
clearCookies()  // @AfterMethod — runs after testSearch
tearDown()      // @AfterClass — runs ONCE after both tests
ℹ️
Multiple @Test methods — alphabetical order by default: If you have multiple @Test methods and don't specify priority, TestNG runs them in alphabetical order by method name. So testCheckout runs before testLogin (c before l alphabetically). To control order explicitly, use @Test(priority = 1) — covered in the next topic.
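The default ordering can be illustrated with plain Java: sorting the method names alphabetically predicts the run order for same-priority tests. DefaultOrderDemo is a hypothetical helper written for this tutorial, not part of TestNG.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class DefaultOrderDemo {

    // Sorts test-method names the way TestNG orders tests with equal priority
    static List<String> executionOrder(List<String> methodNames) {
        List<String> sorted = new ArrayList<>(methodNames);
        sorted.sort(String::compareTo);  // alphabetical
        return sorted;
    }

    public static void main(String[] args) {
        // testCheckout runs before testLogin (c before l alphabetically)
        System.out.println(executionOrder(
            Arrays.asList("testLogin", "testCheckout", "testAddToCart")));
    }
}
```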
🧪 Quiz: If a class has @BeforeMethod, @Test (x3), and @AfterMethod — how many total times will @BeforeMethod and @AfterMethod each run?
05
Advanced @Test
@Test Annotation Attributes
The @Test annotation accepts several powerful attributes that customize how a test runs. These attributes are specified inside the annotation like @Test(priority = 1, groups = "smoke"). They are a key topic in every TestNG interview.

All important @Test attributes with examples:

TestAttributes.java — All major @Test attributes
public class TestAttributes {

    // ── priority: controls execution order (lower = earlier) ──────────
    @Test(priority = 1)
    public void testLogin() {
        // Runs FIRST — priority 1 < priority 2
        System.out.println("Step 1: Login");
    }

    @Test(priority = 2)
    public void testAddToCart() {
        // Runs SECOND — depends logically on login
        System.out.println("Step 2: Add to Cart");
    }

    @Test(priority = 3)
    public void testCheckout() {
        System.out.println("Step 3: Checkout");
    }

    // ── enabled: skip a test without deleting it ──────────────────────
    @Test(enabled = false)
    public void testFeatureUnderDevelopment() {
        // This test is SKIPPED — use when feature is not yet ready
        // Real-world: Feature X is in dev, test is written but not run yet
    }

    // ── timeOut: fail test if it takes too long (ms) ──────────────────
    @Test(timeOut = 5000)
    public void testPageLoadSpeed() {
        // If this test takes more than 5000ms (5 seconds), it FAILS
        // Real-world: Performance assertion — page must load under 5 seconds
        driver.get("https://example.com");
    }

    // ── invocationCount: run same test multiple times ─────────────────
    @Test(invocationCount = 3)
    public void testFlickerCheck() {
        // Runs this test 3 times — useful for detecting flaky tests
        // Real-world: If a test passes sometimes and fails other times
        System.out.println("Flicker test run: " + Thread.currentThread().getId());
    }

    // ── groups: tag test for selective execution ──────────────────────
    @Test(groups = {"smoke", "regression"})
    public void testHomePage() {
        // Belongs to BOTH groups — runs when either group is executed
        // Real-world: Quick smoke tests run on every deploy
    }

    // ── description: document your test ──────────────────────────────
    @Test(description = "Verifies user can login with valid email and password")
    public void testValidLogin() {
        // description appears in TestNG HTML report — very useful for teams
    }

    // ── expectedExceptions: test that an exception IS thrown ──────────
    @Test(expectedExceptions = ArithmeticException.class)
    public void testDivisionByZero() {
        // PASSES if ArithmeticException is thrown, FAILS if not thrown
        // Real-world: Validate that invalid input throws proper exception
        int result = 10 / 0;  // This throws ArithmeticException — test passes!
    }
}
💡
Priority default value is 0. If you don't specify priority, all tests have priority=0 and run in alphabetical order among themselves. Tests with lower priority number run first: priority=0 before priority=1 before priority=2. Negative priorities work too — @Test(priority = -1) runs before priority=0.
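The combined rule (lower priority number first, alphabetical within the same priority) can be sketched in plain Java. PriorityOrderDemo is a hypothetical illustration of the ordering rule, not TestNG's actual scheduler code.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class PriorityOrderDemo {

    // Predicts run order: lower priority number first, ties broken alphabetically
    static List<String> order(Map<String, Integer> priorities) {
        List<String> names = new ArrayList<>(priorities.keySet());
        names.sort(Comparator.<String>comparingInt(priorities::get)
                             .thenComparing(Comparator.naturalOrder()));
        return names;
    }

    public static void main(String[] args) {
        Map<String, Integer> tests = new LinkedHashMap<>();
        tests.put("testCheckout", 2);
        tests.put("testLogin", -1);     // negative priority runs first
        tests.put("testAddToCart", 0);
        System.out.println(order(tests));  // [testLogin, testAddToCart, testCheckout]
    }
}
```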
🧪 Quiz: You have a test for a feature that is still under development. You don't want it to run in CI/CD but you also don't want to delete the test code. What attribute do you use?
06
Validation
Assertions — Hard Assert & Soft Assert
Assertions are the verification points in your tests. They check if the actual result matches the expected result. TestNG provides two types: Hard Assert (org.testng.Assert) and Soft Assert (org.testng.asserts.SoftAssert). The difference is critical — it's one of the most asked interview questions.

Hard Assert — Stops test immediately on failure:

Hard Assert — org.testng.Assert
import org.testng.Assert;
import org.testng.annotations.Test;

public class HardAssertDemo {

    @Test
    public void testHardAssert() {
        // HARD ASSERT — fails immediately, remaining assertions are NOT checked
        Assert.assertEquals("Google", "Google");   // ✅ PASSES
        Assert.assertEquals(10, 20);               // ❌ FAILS HERE — test stops
        Assert.assertTrue(true);                   // NEVER REACHED
        // Result: Test FAILED at the second assertion. The third is never checked.
    }

    @Test
    public void testCommonAssertions() {
        // NOTE: actual, expected, driver, errorMsg, errorElement are placeholders
        // assertEquals — checks if actual equals expected
        Assert.assertEquals(actual, expected, "Failure message");

        // assertTrue — checks if condition is true
        Assert.assertTrue(driver.getTitle().contains("Google"));

        // assertFalse — checks if condition is false
        Assert.assertFalse(errorMsg.isDisplayed(), "Error should not be visible");

        // assertNotNull — checks that object is not null
        Assert.assertNotNull(driver.findElement(By.id("logo")));

        // assertNull — checks that object IS null
        Assert.assertNull(errorElement);

        // fail — explicitly fail a test with a message
        Assert.fail("Feature not implemented yet");
    }
}

Soft Assert — Continues test even after failure, reports all failures at end:

Soft Assert — org.testng.asserts.SoftAssert
import org.testng.asserts.SoftAssert;
import org.testng.annotations.Test;

public class SoftAssertDemo {

    @Test
    public void testProfilePage() {
        // SOFT ASSERT — collects all failures, reports them ALL at end

        // IMPORTANT: Create new SoftAssert object for each test method
        SoftAssert softAssert = new SoftAssert();

        // Real-world: Verify all fields on a user profile page
        softAssert.assertEquals(getUsername(), "priya");     // ✅ Passes — continues
        softAssert.assertEquals(getEmail(), "wrong@email.com"); // ❌ Fails — continues!
        softAssert.assertTrue(isProfilePicVisible());          // ✅ Passes — continues
        softAssert.assertEquals(getPhone(), "wrong-phone");   // ❌ Fails — continues!

        // CRITICAL: Must call assertAll() — this triggers the actual failure report
        // Without assertAll(), test PASSES even with failed assertions!
        softAssert.assertAll();
        // Result: Test FAILED — shows BOTH failures (email + phone) in one report
    }
}
Hard Assert — use when?
Use when the test CANNOT continue if this step fails. Example: If login fails, there's no point checking the dashboard. Hard assert on login so the test stops and doesn't waste time.
Soft Assert — use when?
Use when you want to check MULTIPLE things on a page and need ALL failures reported. Example: Verifying all fields on a profile page — even if email is wrong, you still want to check phone, address, etc.
Critical rule
Always call softAssert.assertAll() at the end. Without it, the test passes even if assertions failed — your test suite silently lies!
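Why assertAll() is mandatory becomes clear from a plain-Java sketch of the mechanism. MiniSoftAssert is a simplified illustration written for this tutorial, not TestNG's actual SoftAssert source: failures are only collected until assertAll() throws.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;

// Simplified sketch of how a soft assert works internally
public class MiniSoftAssert {

    private final List<String> failures = new ArrayList<>();

    // Records the failure instead of throwing, so the test keeps running
    public void assertEquals(Object actual, Object expected) {
        if (!Objects.equals(actual, expected)) {
            failures.add("expected [" + expected + "] but found [" + actual + "]");
        }
    }

    // Only here does the test actually fail, reporting every collected failure.
    // Skip this call and the failures are silently thrown away.
    public void assertAll() {
        if (!failures.isEmpty()) {
            throw new AssertionError(failures.size() + " assertion(s) failed: " + failures);
        }
    }
}
```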
🧪 Quiz: You are verifying a user profile page that has 6 fields (name, email, phone, address, city, pincode). You want to verify all 6 and see ALL failures in one test run — not stop at the first failure. Which assert type do you use?
07
Configuration File
testng.xml — Suite Configuration
testng.xml is the master configuration file for TestNG. It defines which tests to run, in what order, with which parameters, and how many threads to use. Without testng.xml, you can only run individual test classes. With it, you control the entire test suite from one file — and point Jenkins/Maven to it for CI/CD.

Complete testng.xml example — all major elements explained:

testng.xml — Full suite configuration
<!-- DOCTYPE tells TestNG where to find the schema -->
<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">

<!-- parallel: none | methods | classes | tests; thread-count = how many threads run simultaneously -->
<suite name="E-Commerce Test Suite"
       parallel="methods"
       thread-count="3">

    <!-- Suite-level parameters — available to ALL tests -->
    <parameter name="browser" value="chrome"/>
    <parameter name="baseUrl" value="https://shop.example.com"/>

    <!-- Test Group 1: Smoke Tests — run on every deployment -->
    <test name="Smoke Tests">
        <groups>
            <run>
                <include name="smoke"/>   <!-- Only run @Test(groups = "smoke") -->
            </run>
        </groups>
        <classes>
            <class name="com.tests.LoginTest"/>
            <class name="com.tests.HomePageTest"/>
        </classes>
    </test>

    <!-- Test Group 2: Regression Tests — run nightly -->
    <test name="Regression Tests">
        <classes>
            <class name="com.tests.LoginTest">
                <!-- Run only specific methods from this class -->
                <methods>
                    <include name="testValidLogin"/>
                    <include name="testInvalidLogin"/>
                    <exclude name="testRememberMe"/>  <!-- skip this one -->
                </methods>
            </class>
            <class name="com.tests.CartTest"/>
            <class name="com.tests.CheckoutTest"/>
            <class name="com.tests.PaymentTest"/>
        </classes>
    </test>

</suite>

Using @Parameters to receive XML values in test methods:

Using @Parameters annotation in test code
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Optional;
import org.testng.annotations.Parameters;
import org.testng.annotations.Test;

public class CrossBrowserTest {

    private WebDriver driver;

    // @Parameters reads value from testng.xml <parameter> tag
    @Parameters("browser")
    @BeforeClass
    public void setUp(String browser) {
        // browser = "chrome" — value comes from testng.xml
        if (browser.equals("chrome")) {
            driver = new ChromeDriver();
        } else if (browser.equals("firefox")) {
            driver = new FirefoxDriver();
        }
    }

    // @Optional provides a default if parameter not in XML
    @Parameters("environment")
    @Test
    public void testLoginPage(@Optional("staging") String env) {
        // If "environment" not in XML, defaults to "staging"
        System.out.println("Running on: " + env);
    }
}
💡
Where is testng.xml placed? In a Maven project, place testng.xml at the root of your project (same level as pom.xml). Then in pom.xml's Surefire plugin, reference it with <suiteXmlFile>testng.xml</suiteXmlFile>. When Jenkins runs mvn test, Maven automatically picks up testng.xml and runs the entire suite.
🧪 Quiz: In testng.xml, what is the difference between <include> and <exclude> inside <methods>?
08
Advanced Control
Groups & Dependencies
Groups allow you to tag tests with labels (like "smoke", "regression", "payment") and run only specific groups. Dependencies (dependsOnMethods / dependsOnGroups) let you define that Test B should only run if Test A passed first — critical for end-to-end flows where later steps depend on earlier ones.

Groups — tag tests for selective execution:

Groups.java — Smoke, Regression, Payment groups
public class ECommerceTests {

    // ── SMOKE tests: run on every code deploy (fast, critical only) ───
    @Test(groups = "smoke")
    public void testHomePageLoads() { /* ... */ }

    @Test(groups = "smoke")
    public void testLoginPageLoads() { /* ... */ }

    // ── REGRESSION: full suite — run nightly ──────────────────────────
    @Test(groups = {"smoke", "regression"})  // belongs to BOTH groups
    public void testUserLogin() { /* ... */ }

    @Test(groups = "regression")
    public void testForgotPassword() { /* ... */ }

    // ── PAYMENT: run only before payment module changes ───────────────
    @Test(groups = {"regression", "payment"})
    public void testCreditCardPayment() { /* ... */ }

    @Test(groups = "payment")
    public void testUPIPayment() { /* ... */ }
}

Dependencies — define test execution order based on logic:

Dependencies — dependsOnMethods & dependsOnGroups
public class OrderFlowTest {

    // STEP 1: Login — no dependency
    @Test(groups = "flow")
    public void testLogin() {
        System.out.println("Logging in...");
        // If this FAILS → testAddToCart is SKIPPED (hard dependency)
    }

    // STEP 2: Runs only AFTER testLogin SUCCEEDS
    @Test(dependsOnMethods = {"testLogin"}, groups = "flow")
    public void testAddToCart() {
        System.out.println("Adding product to cart...");
        // If testLogin failed → this is SKIPPED automatically
    }

    // STEP 3: Depends on multiple methods
    @Test(dependsOnMethods = {"testLogin", "testAddToCart"})
    public void testCheckout() {
        System.out.println("Checking out...");
        // Runs only if BOTH testLogin AND testAddToCart passed
    }

    // ── Soft dependency — always runs, even if dependency failed ──────
    @Test(dependsOnMethods = {"testLogin"}, alwaysRun = true)
    public void testLogout() {
        // alwaysRun=true = soft dependency
        // Runs after testLogin regardless of pass or fail
        // Real-world: Cleanup/logout must always happen
        System.out.println("Logging out...");
    }

    // ── dependsOnGroups — depend on an entire group ───────────────────
    @Test(dependsOnGroups = {"flow"})
    public void testOrderHistory() {
        // Runs only after ALL tests in "flow" group pass
        System.out.println("Checking order history...");
    }
}
ℹ️
Hard vs Soft dependency: By default (without alwaysRun = true), dependencies are hard — if the dependency method fails, the dependent test is SKIPPED and shown as "SKIP" in the report (not FAIL). With alwaysRun = true, the test always runs regardless — this is called a soft dependency.
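The skip rule condenses into one plain-Java function. DependencySkipDemo is a hypothetical sketch of the decision TestNG makes, not framework code.

```java
public class DependencySkipDemo {

    // What happens to a dependent test, given its dependency's result.
    // alwaysRun = true turns a hard dependency into a soft one.
    static String outcome(boolean dependencyPassed, boolean alwaysRun) {
        if (!dependencyPassed && !alwaysRun) {
            return "SKIP";  // hard dependency: reported as SKIP, not FAIL
        }
        return "RUN";       // dependency passed, or soft dependency overrides
    }

    public static void main(String[] args) {
        System.out.println(outcome(true, false));   // RUN: dependency passed
        System.out.println(outcome(false, false));  // SKIP: hard dependency failed
        System.out.println(outcome(false, true));   // RUN: alwaysRun overrides
    }
}
```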
🧪 Quiz: Test B has @Test(dependsOnMethods = "testA"). testA fails. What happens to Test B?
09
Data-Driven Testing
@DataProvider & Parameterization
@DataProvider is TestNG's mechanism for data-driven testing. It allows you to run the same test method multiple times with different sets of data. The DataProvider method must return Object[][] (2D array) — each inner array is one set of parameters for one test run. This eliminates code duplication when testing multiple scenarios.
🧠 Real-World Analogy: Imagine a login form. You want to test it with 5 different username/password combinations. Without DataProvider, you'd write 5 separate @Test methods with repeated code. With DataProvider, you write the test ONCE and provide a 2D array of 5 rows. TestNG runs the test 5 times automatically, once per row.
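What TestNG does with that 2D array can be sketched in plain Java: iterate the rows and invoke the test once per row. DataProviderLoopDemo is a hypothetical illustration of that loop, not TestNG internals.

```java
public class DataProviderLoopDemo {

    // Same shape a @DataProvider returns: one inner array per test invocation
    static Object[][] loginData() {
        return new Object[][] {
            { "priya@gmail.com", "Pass@123" },
            { "priya@gmail.com", "wrongpass" },
            { "", "" }
        };
    }

    // Simulates the framework loop: the "test" runs once per row
    static int runAll() {
        int invocations = 0;
        for (Object[] row : loginData()) {
            String email = (String) row[0];
            String password = (String) row[1];
            System.out.println("Invoking test with: " + email + " / " + password);
            invocations++;
        }
        return invocations;
    }

    public static void main(String[] args) {
        System.out.println("Total invocations: " + runAll());  // 3 rows, 3 runs
    }
}
```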

Basic DataProvider — login test with multiple users:

DataProviderDemo.java — Login test with multiple data sets
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.testng.Assert;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class LoginDataDrivenTest {

    // Assumes a WebDriver field initialized in a setup method
    private WebDriver driver;

    // ── Define the DataProvider ───────────────────────────────────────
    @DataProvider(name = "loginCredentials")
    public Object[][] getLoginData() {
        // Object[][] = 2D array
        // Each inner array = one set of parameters for one test run
        // Rows: valid | wrong password | unknown user | empty | admin
        return new Object[][] {
            { "priya@gmail.com",   "Pass@123",  "Dashboard" },  // valid
            { "priya@gmail.com",   "wrongpass", "Invalid credentials" }, // wrong pwd
            { "nouser@gmail.com",  "Pass@123",  "User not found" },  // unknown user
            { "",                   "",           "Email is required" }, // empty
            { "admin@company.com", "Admin@456", "Admin Panel" }   // admin user
        };
    }

    // ── Use the DataProvider in @Test ─────────────────────────────────
    @Test(dataProvider = "loginCredentials")
    public void testLogin(String email, String password, String expectedText) {
        // This method runs 5 times — once per row in the DataProvider
        System.out.println("Testing with: " + email + " / " + password);

        driver.findElement(By.id("email")).sendKeys(email);
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("loginBtn")).click();

        // Verify expected text appears on page
        Assert.assertTrue(
            driver.getPageSource().contains(expectedText),
            "Expected: " + expectedText + " for user: " + email
        );
    }
}

DataProvider in a separate class — cleaner project structure:

DataProvider in separate class — professional approach
// ── File 1: TestData.java (separate data class) ───────────────────
public class TestData {

    @DataProvider(name = "registrationData")
    public static Object[][] getRegistrationData() {
        return new Object[][] {
            { "Priya Sharma",   "priya@gmail.com",  "9876543210", "valid"   },
            { "Rahul Mehta",    "rahul@yahoo.com",  "9988776655", "valid"   },
            { "",               "invalid-email",    "1234",       "invalid" }
        };
    }
}

// ── File 2: RegistrationTest.java — use DataProvider from other class
public class RegistrationTest {

    // dataProviderClass points to the class containing the DataProvider
    @Test(dataProvider = "registrationData",
          dataProviderClass = TestData.class)
    public void testRegistration(String name, String email,
                                   String phone, String type) {
        System.out.println("Registering: " + name + " (" + type + ")");
        // Real-world: test registration form with multiple user profiles
    }
}
💡
TestNG report shows individual rows: In the HTML report, when DataProvider runs 5 rows, TestNG shows 5 separate test results — not 1. So if row 3 fails and rows 1,2,4,5 pass, you see "4 passed, 1 failed" with the exact parameters that failed clearly shown. This makes debugging data-driven failures very easy.
🧪 Quiz: A @DataProvider returns Object[][] with 4 rows. The @Test method that uses this DataProvider — how many times does it run?
10
Performance
Parallel Execution in TestNG
Parallel execution in TestNG means running multiple tests at the same time (simultaneously) in separate threads. This dramatically reduces total test execution time. A suite that takes 30 minutes sequentially might take only 10 minutes with 3 parallel threads. TestNG supports parallel execution at 4 levels, configurable in testng.xml.
🧠 Real-World Example: A team has 4 browsers to test on (Chrome, Firefox, Edge, Safari). Sequential execution: Test all browsers one by one → 40 minutes. Parallel execution: Run all 4 browsers simultaneously → 10 minutes. Same coverage, 4x faster. This is why every company doing cross-browser testing uses TestNG parallel execution.

4 parallel modes in testng.xml:

parallel="methods"
Each @Test method runs in its own thread. Multiple methods from the same class run simultaneously. Most granular level.
parallel="classes"
Each test class runs in its own thread. All methods within a class still run sequentially, but different classes run simultaneously.
parallel="tests"
Each <test> tag in testng.xml runs in its own thread. Most common for cross-browser testing — one <test> per browser.
parallel="instances"
Tests in different instances of the same class run in parallel. Used with @Factory annotation.

Cross-browser parallel testing — the most common real-world use:

testng.xml — Cross-browser parallel execution
<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">
<!-- parallel="tests" + thread-count="3" = 3 browsers run simultaneously -->
<suite name="Cross Browser Suite" parallel="tests" thread-count="3">

    <!-- Chrome tests -->
    <test name="Chrome Tests">
        <parameter name="browser" value="chrome"/>
        <classes><class name="com.tests.LoginTest"/></classes>
    </test>

    <!-- Firefox tests — runs in parallel with Chrome -->
    <test name="Firefox Tests">
        <parameter name="browser" value="firefox"/>
        <classes><class name="com.tests.LoginTest"/></classes>
    </test>

    <!-- Edge tests — also runs in parallel -->
    <test name="Edge Tests">
        <parameter name="browser" value="edge"/>
        <classes><class name="com.tests.LoginTest"/></classes>
    </test>

</suite>

Thread-safe WebDriver — critical for parallel execution:

Thread-safe WebDriver using ThreadLocal
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.edge.EdgeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Parameters;

public class BaseTest {

    // ThreadLocal ensures each thread has its OWN WebDriver instance
    // Without ThreadLocal, parallel threads would share one browser — BUGS!
    private static final ThreadLocal<WebDriver> driverThread = new ThreadLocal<>();

    @Parameters("browser")
    @BeforeMethod
    public void setUp(String browser) {
        WebDriver driver;
        if (browser.equalsIgnoreCase("chrome")) {
            driver = new ChromeDriver();
        } else if (browser.equalsIgnoreCase("firefox")) {
            driver = new FirefoxDriver();
        } else {
            driver = new EdgeDriver();
        }
        driverThread.set(driver);  // store in current thread's slot
    }

    // All test classes call getDriver() — each thread gets its OWN driver
    public WebDriver getDriver() {
        return driverThread.get();
    }

    @AfterMethod
    public void tearDown() {
        if (getDriver() != null) {
            getDriver().quit();
            driverThread.remove();  // prevent memory leaks
        }
    }
}
Common parallel execution mistake — shared WebDriver: Never use a single static WebDriver instance for parallel tests. If Thread 1 and Thread 2 both use the same static WebDriver driver, they fight over the same browser window. Always use ThreadLocal<WebDriver> to give each thread its own driver instance. This is the #1 bug in parallel Selenium setups.
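The isolation ThreadLocal provides can be demonstrated in plain Java, with no Selenium at all. A minimal sketch (ThreadLocalDemo is illustrative, not part of TestNG) in which each thread stores and reads only its own value — exactly how each parallel test thread holds its own WebDriver:

```java
public class ThreadLocalDemo {

    // Each thread gets its own slot — no sharing, just like one driver per thread
    private static final ThreadLocal<String> driverName = new ThreadLocal<>();

    // Start a worker thread that sets, reads, and clears its own slot
    public static String run(String browser) throws InterruptedException {
        final String[] seen = new String[1];
        Thread t = new Thread(() -> {
            driverName.set(browser);       // like driverThread.set(driver) in setUp()
            seen[0] = driverName.get();    // reads only this thread's value
            driverName.remove();           // like tearDown() — prevent leaks
        });
        t.start();
        t.join();
        return seen[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run("chrome"));    // chrome
        System.out.println(run("firefox"));   // firefox
        System.out.println(driverName.get()); // null — main thread never set a value
    }
}
```

The last line is the key point: the main thread sees null because it never called set(), proving that no thread can observe (or corrupt) another thread's value.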
🧪 Quiz: In testng.xml, you set parallel="methods" and thread-count="5". What does this mean?
11
Advanced Features
Listeners & IRetryAnalyzer
Listeners in TestNG are interfaces that "listen" to test events (start, pass, fail, skip) and let you execute custom code when those events happen. The most commonly used listener is ITestListener. IRetryAnalyzer is a special interface that lets you automatically retry failed tests — essential for handling flaky tests in CI/CD.

ITestListener — screenshot on failure (most common real-world use):

TestListener.java — Auto screenshot on failure
import java.io.File;
import java.io.IOException;
import org.apache.commons.io.FileUtils;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;
import org.testng.ITestListener;
import org.testng.ITestResult;

public class TestListener implements ITestListener {

    // Called when a test STARTS
    @Override
    public void onTestStart(ITestResult result) {
        System.out.println("▶ STARTED: " + result.getName());
    }

    // Called when a test PASSES
    @Override
    public void onTestSuccess(ITestResult result) {
        System.out.println("✅ PASSED: " + result.getName());
    }

    // Called when a test FAILS — take screenshot automatically!
    @Override
    public void onTestFailure(ITestResult result) {
        System.out.println("❌ FAILED: " + result.getName());

        // Real-world: get the driver from the test instance and take a screenshot
        Object testInstance = result.getInstance();
        if (testInstance instanceof BaseTest) {
            WebDriver driver = ((BaseTest) testInstance).getDriver();
            File screenshot = ((TakesScreenshot) driver)
                .getScreenshotAs(OutputType.FILE);
            try {
                // Copy screenshot to the screenshots/ folder
                FileUtils.copyFile(screenshot,
                    new File("screenshots/" + result.getName() + ".png"));
            } catch (IOException e) {
                // copyFile throws a checked IOException — listener methods can't
                System.out.println("Could not save screenshot: " + e.getMessage());
            }
        }
    }

    // Called when a test is SKIPPED
    @Override
    public void onTestSkipped(ITestResult result) {
        System.out.println("⏭ SKIPPED: " + result.getName());
    }
}

// ── Attach listener to test class ────────────────────────────────
// Method 1: @Listeners annotation on class
@Listeners(TestListener.class)
public class LoginTest { /* ... */ }

// Method 2: In testng.xml (applies to ALL classes in suite)
//  <listeners>
//    <listener class-name="com.listeners.TestListener"/>
//  </listeners>

IRetryAnalyzer — automatically retry failed tests:

RetryAnalyzer.java — Retry failed tests up to 3 times
import org.testng.IRetryAnalyzer;
import org.testng.ITestResult;

public class RetryAnalyzer implements IRetryAnalyzer {

    private int retryCount = 0;
    private static final int MAX_RETRY = 3;  // retry max 3 times

    @Override
    public boolean retry(ITestResult result) {
        // TestNG calls this after each failure
        // return true  = retry the test
        // return false = don't retry, mark as FAILED
        if (retryCount < MAX_RETRY) {
            retryCount++;
            System.out.println("Retrying: " + result.getName()
                + " | Attempt: " + retryCount + "/" + MAX_RETRY);
            return true;  // retry
        }
        return false;  // stop retrying
    }
}

// ── Attach RetryAnalyzer to specific tests ────────────────────────
public class FlakyTest {

    // This test will retry up to 3 times if it fails
    @Test(retryAnalyzer = RetryAnalyzer.class)
    public void testNetworkDependentFeature() {
        // Real-world: Network calls sometimes fail due to latency
        // RetryAnalyzer retries automatically — no manual re-run needed
        Assert.assertTrue(callExternalAPI());
    }
}
💡
When to use IRetryAnalyzer: Use it for tests that are genuinely "flaky" — they occasionally fail due to network timeouts, slow servers, or timing issues — NOT due to actual bugs. Don't use it to mask real test failures. A well-designed test suite should have very few flaky tests; if many tests need retrying, there's likely a deeper architectural problem (missing waits, poor synchronization).
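Conceptually, TestNG simply re-invokes a failed test for as long as retry() keeps returning true. A simplified, framework-free sketch of that loop (RetryLoopDemo and runWithRetry are hypothetical helpers for illustration, not TestNG API):

```java
import java.util.function.BooleanSupplier;

public class RetryLoopDemo {

    static final int MAX_RETRY = 3;  // same cap as the RetryAnalyzer above

    // Runs the "test" until it passes or retries are exhausted.
    // Returns the total number of attempts made.
    static int runWithRetry(BooleanSupplier test) {
        int attempts = 0;
        int retryCount = 0;
        while (true) {
            attempts++;
            if (test.getAsBoolean()) {
                return attempts;   // passed — stop
            }
            if (retryCount < MAX_RETRY) {
                retryCount++;      // retry() returned true — run again
            } else {
                return attempts;   // retry() returned false — mark FAILED
            }
        }
    }
}
```

A flaky test that fails twice and then passes finishes on its third attempt and is reported as PASSED; a test that always fails stops after 1 initial run plus 3 retries, i.e. 4 attempts.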
🧪 Quiz: What is the purpose of ITestListener's onTestFailure() method in real-world Selenium projects?
12
Professional Standards
TestNG Best Practices

Follow these professional best practices to write maintainable, reliable, and scalable TestNG test suites used in real-world Selenium automation projects.

  • 1
    Always use @BeforeClass / @AfterClass for browser setup/teardown — never @BeforeSuite. Unless you truly need one browser for the entire suite, use @BeforeClass. @BeforeSuite creates one browser for ALL tests — if one test leaves the browser in a bad state, all subsequent tests are affected. @BeforeClass gives each class its own fresh browser instance.
  • 2
    Use @BeforeMethod to navigate to a known starting point. Before each @Test, navigate the browser to the page you're testing. This ensures tests are truly independent — a failing test cannot affect the next test by leaving the browser on the wrong page.
  • 3
    Always call driver.quit() — not driver.close() — in @AfterClass. driver.close() closes only the current window; driver.quit() closes ALL windows and terminates the WebDriver process. Not calling quit() causes memory leaks, with dozens of orphaned chromedriver.exe processes on the CI server.
  • 4
    Use ThreadLocal<WebDriver> for parallel execution. Never use a static WebDriver when running parallel tests — each thread must have its own WebDriver stored in a ThreadLocal. A BaseTest class that all test classes extend, containing the ThreadLocal WebDriver setup, is the standard architecture in every professional Selenium framework.
  • 5
    Prefer @Test(groups) over copying tests. If the same test needs to run in both smoke AND regression, add groups = {"smoke", "regression"} — don't copy the test method. Control which groups run via testng.xml. This keeps your code DRY (Don't Repeat Yourself).
  • 6
    Use @DataProvider for all data-driven scenarios. Never hardcode test data inside @Test methods; use @DataProvider so test data can be changed without modifying test logic. For large data sets, read the data from an Excel file using Apache POI inside the @DataProvider — this is the industry-standard approach.
  • 7
    Use soft assertions for multi-element page verification. When verifying multiple elements on a single page (profile page, order summary, etc.), use SoftAssert so ALL failures are reported in one run. Always call softAssert.assertAll() at the end — without it, the test silently passes even with failed assertions.
  • 8
    Attach ITestListener via testng.xml — not @Listeners. If you add @Listeners on every test class, you'll forget it on some classes. Adding the listener in testng.xml ensures it applies to ALL test classes in the suite automatically — no class-level annotation needed.
  • 9
    Name test methods descriptively — they appear in reports. testLogin() is vague; testValidLoginRedirectsToDashboard() or testInvalidPasswordShowsErrorMessage() is clear. When a test fails in Jenkins, the method name is the first thing anyone sees — make it self-explanatory.
  • 10
    Always re-run testng-failed.xml after failures. After a run with failures, TestNG generates test-output/testng-failed.xml, which contains ONLY the failed tests. In CI/CD, run this file as a second pass to confirm whether failures are real bugs or transient issues. This is standard practice in every professional QA team.
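The soft-assertion pitfall in practice 7 — forgetting assertAll() — is easy to model in plain Java. A minimal framework-free sketch (SoftCheck is a hypothetical stand-in for TestNG's SoftAssert, not the real class):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for TestNG's SoftAssert: failures are recorded,
// not thrown, until assertAll() is called.
public class SoftCheck {

    private final List<String> failures = new ArrayList<>();

    public void assertTrue(boolean condition, String message) {
        if (!condition) {
            failures.add(message);  // record the failure and keep going
        }
    }

    public void assertAll() {
        // Without this call, recorded failures are never reported —
        // which is exactly why a test "silently passes"
        if (!failures.isEmpty()) {
            throw new AssertionError("Soft assertion failures: " + failures);
        }
    }
}
```

If assertTrue(false, ...) is called but assertAll() never is, no exception is ever thrown and the test framework sees a clean pass — the failure list simply gets garbage-collected.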
🎯
Interview-ready answer — "What are the advantages of TestNG over JUnit?":

"TestNG has several advantages over JUnit. First, TestNG supports multiple levels of setup/teardown annotations — @BeforeSuite, @BeforeTest, @BeforeClass, @BeforeMethod and their After counterparts — giving fine-grained control over the test lifecycle; JUnit 4 offers only @Before/@After (equivalent to @BeforeMethod/@AfterMethod) and @BeforeClass/@AfterClass. Second, TestNG has built-in parallel execution support via testng.xml, critical for cross-browser testing with Selenium Grid; JUnit 4 had no native parallel support. Third, TestNG's @DataProvider makes data-driven testing elegant — running the same test with multiple inputs without code duplication. Fourth, TestNG's grouping feature lets teams tag tests as 'smoke' or 'regression' and run specific groups in CI/CD. Fifth, TestNG generates detailed HTML reports automatically, with pass/fail/skip counts and execution times. And finally, TestNG's dependsOnMethods attribute allows defining test execution order based on business logic — essential for E2E flow tests like login → addToCart → checkout."
🧪 Final Quiz: After a TestNG run where 8 tests passed and 3 failed, you want to re-run only the 3 failed tests. What do you do?

Ready to Master TestNG in Real Projects?

STAD Solution's QA Automation course covers TestNG with Selenium — complete framework design, real projects, CI/CD integration, and 100% placement support.

Explore Courses at STAD Solution →