Topic 1 of 12
🧪 Complete Tutorial

Manual Testing
A to Z Guide

Master manual testing from scratch — concepts, types, test case writing, techniques and best practices. Simple language, real examples.

⏱️ ~2.5 hrs 🎯 12 Topics 🧪 Quiz each section
01
Introduction
What is Manual Testing?
Manual Testing is the process of testing a software application by hand — without using any automation tools. A human tester personally executes test cases, interacts with the application, and compares actual behaviour to expected behaviour to find defects.

In manual testing, the tester acts like a real user. They click buttons, fill forms, navigate screens, and verify that everything works as it should — based on the requirements document or user story.

🧠 Simple analogy: Imagine a new restaurant has opened. Before serving customers, the chef (developer) makes a dish. The food taster (manual QA tester) personally eats the dish — checks if it tastes right, is cooked properly, and matches what was ordered. The food taster doesn't use any machine — they use their own senses and judgement. That's manual testing.

Why manual testing is still essential:

  • Human judgement: A human can notice that a button "looks off", a screen "feels confusing", or an error message "doesn't make sense" — things that no automated script can detect.
  • Exploratory testing: Testers can think creatively, test unusual scenarios, and explore the application freely — uncovering edge case bugs that were never anticipated.
  • UI/UX validation: Checking if the layout, design, fonts, colours, and spacing match the approved design mockup — only a human eye can do this properly.
  • New features in early stages: When a feature is being built for the first time, manual testing is faster to start with before investing time in automation scripts.
  • UAT (User Acceptance Testing): Real users or business stakeholders test the product before go-live — this is always manual, never automated.
⚠️
Manual vs Automation: Manual testing is not outdated or inferior. Even in companies with full automation, 30–40% of testing remains manual — exploratory, UI/UX, and UAT specifically. Automation handles repetitive regression tests; manual handles intelligence and judgement.
🧪 Quiz: Which type of testing CANNOT be fully automated and always requires a human?
02
Foundation
SDLC and Where Testing Fits
SDLC (Software Development Life Cycle) is the step-by-step process a software team follows to plan, build, test, and release software. Testing is not a separate afterthought — it is a dedicated phase in the SDLC and also happens throughout the entire cycle.

The 6 phases of SDLC:

Planning
Requirement Analysis
Design
Development
Testing
Deployment
Planning
Project scope, timeline, and resources are decided. QA team is introduced. Test plan preparation begins.
Requirement Analysis
Business Analysts document what the software must do (BRD/FRS). QA reviews requirements to identify gaps or ambiguities early.
Design
System architecture and UI mockups are created. QA starts identifying testable scenarios and prepares test cases.
Development
Developers write code. QA finalises test cases and sets up the test environment. Developers also perform unit testing during this phase.
Testing
QA executes test cases — functional, integration, regression, UAT. Bugs are logged in JIRA. Dev fixes bugs. QA retests.
Deployment
Approved build is released to production. QA does smoke testing on production after deployment to confirm all is working.
💡
"Shift Left Testing" is a modern approach where QA gets involved from the Requirement and Design phases — catching defects before a single line of code is written. This saves huge time and cost. Early bug = cheap fix. Late bug = expensive fix.
🧪 Quiz: In which SDLC phase does QA review requirements to catch gaps and ambiguities early?
03
Core Concept
Black Box vs White Box vs Grey Box Testing
These terms describe how much the tester knows about the internal code of the application they are testing. This is one of the most fundamental concepts in software testing.
⬛ Black Box Testing
Code knowledge: None — tester doesn't see any code
Focus: Input → Output behaviour. Does it work as expected?
Who does it: Manual QA testers, business users
Example: Testing the login page — enter email/password, verify dashboard appears
⬜ White Box Testing
Code knowledge: Full — tester can read the source code
Focus: Internal logic, code paths, conditions, loops
Who does it: Developers, SDET engineers
Example: Testing that the interest calculation formula in code handles all conditions correctly
🔲 Grey Box Testing
Code knowledge: Partial — knows some internal details (DB, APIs)
Focus: Mix of functional testing + internal validation
Who does it: Senior QA / automation engineers
Example: Testing an API response AND also checking the database record was created correctly
Type | Code Knowledge | Focus | Who Performs | Example
⬛ Black Box | None — no code visibility | Input → Output behaviour | Manual QA testers, business users | Testing login page with valid/invalid credentials
⬜ White Box | Full — reads source code | Internal logic, code paths | Developers, SDET engineers | Testing interest calculation formula logic in code
🔲 Grey Box | Partial — knows some internals | Functional + internal validation | Senior QA / automation engineers | Checking API response AND verifying DB record
ℹ️
Manual QA testers primarily do Black Box Testing. They test what the user sees and experiences — they don't read or understand the code. This is why black box testing is also called "specification-based testing" — the tester works from the requirements specification only.
🧪 Quiz: A QA engineer tests the "Forgot Password" feature by entering an email and verifying they receive a reset link — without looking at any code. This is called?
04
Testing Types
Functional Testing — What the Software DOES
Functional Testing verifies that every feature and function of the software works according to the specified requirements. It checks WHAT the software does — does clicking "Add to Cart" actually add the item? Does the login page accept valid credentials and reject invalid ones?
🧠 Analogy: You order a refrigerator. Functional testing checks: Does it cool? Does the door seal work? Does the light turn on when you open it? Can you adjust the temperature? Each listed feature is verified individually. It tests what the product is supposed to DO.

Types of Functional Testing:

  1. Unit Testing
     Testing the smallest individual unit/module of code in isolation. Usually done by developers, not QA.
     Example: Testing only the "calculate discount" function — giving it different inputs and checking its output.
  2. Integration Testing
     Testing how multiple modules or components work together. Checks the communication and data flow between units.
     Example: After the login module is integrated with the profile module, verify that logging in correctly loads the user's profile data.
  3. System Testing
     Testing the entire, fully integrated system against the requirements. Done by the QA team in the testing environment. This is the most important QA phase.
     Example: Testing the entire e-commerce app end-to-end — register, login, browse products, add to cart, checkout, payment, order confirmation.
  4. User Acceptance Testing (UAT)
     Final testing done by actual end users or business clients to confirm the software meets real-world requirements before go-live. Done in the staging/pre-production environment.
     Example: The bank's business team tests the new loan application feature to confirm it matches their defined business rules before release to customers.
📱 Real Example — Login Feature Functional Tests
✓ Enter valid email + valid password → User logs in and lands on dashboard
✓ Enter valid email + wrong password → Error "Invalid credentials" shown
✓ Leave email field empty → Error "Email is required" shown
✓ Enter unregistered email → Error "Account not found" shown
✓ Click "Remember me" → Next visit auto-fills credentials
✓ "Forgot password" link navigates to reset password page
🧪 Quiz: Testing whether the complete e-commerce app works end-to-end (login → browse → checkout) is called?
05
Testing Types
Non-Functional Testing — HOW the Software Performs
Non-Functional Testing tests aspects of the software that are not related to specific features or functions — it tests HOW WELL the software works. This includes performance, speed, security, compatibility, and usability.
🧠 Same refrigerator analogy: Functional testing checked if it cools. Non-functional testing checks: How fast does it cool to 4°C? (Performance) | Can it handle electricity fluctuations safely? (Reliability) | Is it easy for a 70-year-old to operate? (Usability) | Does it fit in a 2BHK kitchen? (Compatibility)

Key types of Non-Functional Testing:

  1. Performance Testing
     Tests how the system behaves under a specific workload — response time, speed, stability, and scalability.
     Example: Does the product search page load in under 2 seconds when 1,000 users are browsing simultaneously?
  2. Load Testing
     Tests how the system behaves under a normal and expected load. Validates that the system can handle the anticipated number of users.
     Example: Testing that the app handles 500 concurrent users without slowing down — this is the expected peak load.
  3. Stress Testing
     Tests the system beyond its normal operating capacity to find its breaking point — what happens when the system is overloaded?
     Example: Gradually increasing users from 500 to 5,000 to find at what point the server crashes or starts giving errors.
  4. Usability Testing
     Tests how easy and intuitive the application is to use for real users. Measures user satisfaction, ease of navigation, and clarity.
     Example: Asking 5 non-technical users to complete a purchase on the app and observing where they get confused or stuck.
  5. Security Testing
     Tests that the application is protected from unauthorised access, data breaches, SQL injection, XSS, and other attacks.
     Example: Checking that entering admin' OR '1'='1 in the login field does NOT bypass authentication (SQL injection test).
  6. Compatibility Testing
     Tests that the software works correctly across different browsers, operating systems, screen sizes, and devices.
     Example: Testing the web app on Chrome, Firefox, Safari, and Edge — and also on Android and iOS mobile phones.
💡
Functional vs Non-Functional — one-line summary:
Functional = Does it WORK correctly? (Login button logs you in ✓)
Non-Functional = Does it work WELL? (Login page loads in <1 second ✓ | Works on all browsers ✓)
🧪 Quiz: Testing the app on Chrome, Firefox, Safari, and Edge browsers to make sure it works on all of them is called?
06
Must-Know Testing Types
Smoke, Sanity & Regression Testing

These three are the most frequently asked about in QA interviews. Each serves a distinct purpose and is done at a different stage.

🔴 Smoke Testing — "Is the build stable enough to test?"
What it is: A quick, high-level check of the most critical functions of a new build to verify it is stable enough for further testing. If smoke testing fails, the entire build is rejected and sent back to developers — QA does not waste time testing a broken build.

When: Every time a new build is received from developers.
Scope: Broad — covers the entire application, but only at a surface level.
Duration: 15–30 minutes. Quick and shallow.
Done by: Developers or QA team.

Real example: New build arrives for an e-commerce app. Smoke tests: Can you open the app? Can you log in? Does the home page load? Can you search for a product? If any of these fail → build rejected.
🟡 Sanity Testing — "Is the specific fix/change working correctly?"
What it is: A narrow, focused check done after a minor bug fix or small code change. It verifies that the specific fix works correctly AND hasn't broken the immediately surrounding functionality. It is a subset of regression testing.

When: After a specific bug fix is received for retesting.
Scope: Narrow — only the fixed area and related modules.
Duration: Quick check, not an exhaustive test.
Done by: QA team. Sanity tests are often informal and not as formally documented as other test types.

Real example: Developer fixed the "OTP not sending" bug. Sanity test: Verify OTP works now. Also check that login, registration, and password reset nearby features weren't broken.
🟢 Regression Testing — "Did the new changes break any existing features?"
What it is: A comprehensive re-execution of existing test cases to verify that new code changes, bug fixes, or new features have not accidentally broken previously working functionality. This is the broadest and most thorough of the three.

When: After every new build, every bug fix, or before a major release.
Scope: Full application — all existing features are retested.
Duration: Can take hours to days. Often automated for speed.
Done by: QA team only.

Real example: Developer adds a new "Wishlist" feature to the e-commerce app. Regression testing: Re-run all test cases for Login, Cart, Checkout, Payment, Profile, Search — to make sure adding "Wishlist" didn't break anything that was previously working.
Quick Comparison
Smoke: New build → broad, shallow, quick — "does it start?"
Sanity: After fix → narrow, targeted — "is this fix OK?"
Regression: After any change → full app — "did anything break?"
 | Smoke | Sanity | Regression
When done | After new build | After specific fix | After any code change
Scope | Entire app, surface level | Specific module only | Entire app, in depth
Speed | Very fast (15–30 min) | Fast (targeted) | Slow (hours to days)
Goal | Build stable for testing? | Specific fix working? | Did new change break anything?
Done by | Dev or QA | QA only | QA only
ℹ️
Memory trick: Think of a funnel 🔽
Smoke = wide opening (broad, quick scan of whole app)
Sanity = middle (narrowed to specific changes)
Regression = bottom (deep, full coverage)
🧪 Quiz: Developer fixes the "Payment gateway timeout" bug. QA retests ONLY the payment module and nearby checkout flow to verify the fix. This is called?
07
Core Skill
How to Write a Test Case
A Test Case is a documented set of steps, inputs, and expected results that tells a tester exactly how to test a specific functionality. It is the most fundamental deliverable of a manual QA engineer.
🧠 Analogy: A test case is like a recipe. A recipe tells you: what ingredients you need (pre-conditions), exactly what steps to follow (test steps), and what the final dish should look and taste like (expected result). If the dish doesn't match, you've found a bug.

Mandatory components of every test case:

Test Case ID
A unique identifier for the test case. Format: TC_[Module]_[Number] — e.g. TC_LOGIN_001. Used to reference and track the test case.
Test Case Title
A short, clear description of what is being tested. Format: Verb + what + condition. E.g. "Verify login with valid email and password"
Module / Feature
Which part of the application is being tested. E.g. Login, Registration, Checkout, Profile.
Pre-conditions
What must be set up or true BEFORE the test can be executed. E.g. "User must have a registered account" or "User must be on the Login page".
Test Steps
Numbered, sequential, specific actions the tester must perform. Use action verbs: Click, Enter, Select, Navigate, Verify. Each step = one action only.
Test Data
The exact input values to use during the test. E.g. Email: priya@example.com, Password: Test@123. Never leave this vague.
Expected Result
The exact, specific outcome that SHOULD happen if the feature works correctly. Be precise — "User is redirected to /dashboard" is better than "User logs in".
Actual Result
What ACTUALLY happened when the tester ran the test. Filled in during execution, not while writing.
Status
Pass / Fail / Blocked / Skipped — filled in after execution.
Priority
High / Medium / Low — how critical is this test case to execute? High priority = must test every sprint.

Example of a complete, well-written test case:

🧪 TC_LOGIN_001 — Verify login with valid credentials
Test Case ID
TC_LOGIN_001
Title
Verify that a registered user can successfully log in using valid email and password
Module
Login / Authentication
Priority
High
Pre-conditions
1. User has a registered account with email: priya@example.com
2. User is on the Login page (URL: /login)
3. User is currently logged out
Test Data
Email: priya@example.com | Password: Test@123
Test Steps
1. Navigate to the login page
2. Enter email: priya@example.com in the Email field
3. Enter password: Test@123 in the Password field
4. Click the "Login" button
Expected Result
User is successfully logged in and redirected to the Dashboard page (URL: /dashboard). User's name "Priya" is displayed in the top-right corner.
Actual Result
[To be filled during execution]
Status
[To be filled after execution]
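For reference, the example above maps cleanly onto a record in code. A minimal Python sketch — the `TestCase` class and its field names are illustrative, not a standard format:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """One manual test case, mirroring the mandatory fields listed above."""
    tc_id: str
    title: str
    module: str
    priority: str
    preconditions: list
    test_data: dict
    steps: list
    expected_result: str
    actual_result: str = ""        # filled in during execution, not while writing
    status: str = "Not Executed"   # Pass / Fail / Blocked / Skipped after execution

tc_login_001 = TestCase(
    tc_id="TC_LOGIN_001",
    title="Verify that a registered user can log in with valid email and password",
    module="Login / Authentication",
    priority="High",
    preconditions=[
        "User has a registered account with email priya@example.com",
        "User is on the Login page (/login)",
        "User is currently logged out",
    ],
    test_data={"email": "priya@example.com", "password": "Test@123"},
    steps=[
        "Navigate to the login page",
        "Enter email priya@example.com in the Email field",
        "Enter password Test@123 in the Password field",
        'Click the "Login" button',
    ],
    expected_result="User is redirected to /dashboard; name 'Priya' shown top-right",
)

print(tc_login_001.tc_id, "|", tc_login_001.status)   # TC_LOGIN_001 | Not Executed
```

Keeping test cases in a structured form like this (or a spreadsheet with the same columns) makes it easy to filter by module, priority, or status during execution.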
⚠️
Common test case writing mistakes to avoid:
❌ Vague steps: "Fill in the login form" → ✅ "Enter email in Email field"
❌ Multiple actions per step: "Enter email, password and click login" → ✅ Three separate steps
❌ Vague expected result: "Login works" → ✅ "User redirected to /dashboard, name shown in header"
❌ No test data: "Enter valid email" → ✅ "Enter: priya@example.com"
🧪 Quiz: What is the MOST important reason for writing specific test data (exact email and password) in a test case instead of just saying "valid credentials"?
08
Core Skill
Test Scenarios vs Test Cases
A Test Scenario is a high-level description of what to test — a "what if" question about the feature. A Test Case is the detailed, step-by-step implementation of a scenario — the how to test it. One scenario generates multiple test cases.
Test Scenario
High-level "what if". One line. No steps. E.g. "What if a user tries to log in?" — broad, conceptual.
Test Case
Detailed. Has specific steps, data, expected result. E.g. TC_LOGIN_001, TC_LOGIN_002, TC_LOGIN_003 — all derived from the one scenario above.
🔍 Example — Login Feature
Test Scenario 1: Verify login with valid credentials
→ TC_001: Valid email + valid password → login successful

Test Scenario 2: Verify login with invalid credentials
→ TC_002: Valid email + wrong password → error shown
→ TC_003: Unregistered email + any password → error shown
→ TC_004: Empty email field → "Email required" error
→ TC_005: Empty password field → "Password required" error

Test Scenario 3: Verify account lockout after multiple failed attempts
→ TC_006: 5 consecutive wrong passwords → account locked for 30 minutes

Test Scenario 4: Verify "Forgot Password" functionality
→ TC_007: Enter registered email → OTP/reset link received
→ TC_008: Enter unregistered email → appropriate error shown
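The scenario-to-case fan-out above can be sketched as a plain mapping from each scenario to its derived case IDs (the structure is illustrative; titles and IDs are taken from the example):

```python
# One high-level scenario expands into one or more detailed test cases.
scenarios = {
    "Verify login with valid credentials": ["TC_001"],
    "Verify login with invalid credentials": ["TC_002", "TC_003", "TC_004", "TC_005"],
    "Verify account lockout after multiple failed attempts": ["TC_006"],
    'Verify "Forgot Password" functionality': ["TC_007", "TC_008"],
}

total_cases = sum(len(case_ids) for case_ids in scenarios.values())
print(f"{len(scenarios)} scenarios -> {total_cases} test cases")
# 4 scenarios -> 8 test cases
```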
💡
Interview tip: When asked "How many test cases would you write for a Login page?" — a good answer is 15–25 test cases, covering: positive cases, negative cases, boundary values, UI validation, security (SQL injection), and cross-browser tests. Always start by listing scenarios first, then derive cases from them.
🧪 Quiz: "Verify what happens when a user tries to log in with an expired password" — is this a Test Scenario or a Test Case?
09
Execution
Test Execution & Status Types
Test Execution is the process of actually running the written test cases against the software and recording the results. During execution, every test case gets a status that describes what happened.

The 4 test case execution statuses:

  • Pass: The actual result matches the expected result exactly. The feature is working as specified. No bug is logged.
    Example: Entering valid credentials → user is redirected to dashboard ✅ — matches the expected result.
  • Fail: The actual result does NOT match the expected result. A defect is found. A bug is immediately logged in JIRA with all details.
    Example: Entering valid credentials → page shows a blank screen instead of the dashboard ❌ — bug logged.
  • Blocked: The test case cannot be executed because of a dependency or another bug — a pre-condition has not been met. This is not the tester's fault; the test is blocked by an external issue.
    Example: Can't test "Add to Cart" because the Login feature itself has a critical bug and the tester cannot log in at all.
  • Skipped / Not Executed: The test case was intentionally not executed — due to time constraints, being out of scope for the current sprint, or the feature not yet being developed.
    Example: "Test promo code feature" was skipped because the promo code feature is not in the current sprint's scope.
⚠️
Fail vs Blocked — key difference:
Fail = You ran the test and the result was wrong (bug found).
Blocked = You COULD NOT even run the test because something else is preventing it.
📊 Test Execution Report Example
Total Test Cases: 50
PASS 34 (68%) — Feature working correctly
FAIL 8 (16%) — Bugs logged in JIRA
BLOCKED 5 (10%) — Waiting for critical bugs to be fixed
SKIPPED 3 (6%) — Out of current sprint scope
→ Test Completion Rate: 94% (47 of 50 cases — everything except the 3 skipped) | Pass Rate: 81% (34 passed out of the 42 cases with a Pass/Fail outcome)
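The report's figures can be recomputed to make the two rates explicit. A small sketch, assuming completion counts every case except the skipped ones and pass rate is taken only over Pass/Fail outcomes:

```python
# Status counts from the execution report above.
counts = {"Pass": 34, "Fail": 8, "Blocked": 5, "Skipped": 3}
total = sum(counts.values())                                      # 50

# Completion: all cases except those intentionally skipped.
completion = (total - counts["Skipped"]) / total                  # 47/50
# Pass rate: only cases that actually produced a Pass/Fail outcome.
pass_rate = counts["Pass"] / (counts["Pass"] + counts["Fail"])    # 34/42

print(f"Completion: {completion:.0%} | Pass rate: {pass_rate:.0%}")
# Completion: 94% | Pass rate: 81%
```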
🧪 Quiz: A QA tester cannot test the "Order History" page because the Login feature is currently broken. What status should the "Order History" test cases get?
10
Testing Techniques
Positive & Negative Testing
Every feature must be tested with both valid (positive) and invalid (negative) inputs. This is one of the most fundamental rules of manual testing and applies to every single feature you test.
✅ Positive Testing
Goal: Verify the feature works correctly with valid, expected input
What to check: Happy path — normal user behaviour
Example: Login with correct email + correct password → should log in
❌ Negative Testing
Goal: Verify the system handles invalid, unexpected, or extreme input gracefully
What to check: Error handling, validation messages, edge cases
Example: Login with wrong password → should show "Invalid credentials" error
 | Positive Testing | Negative Testing
Goal | System works with valid input | System handles invalid input gracefully
Input type | Valid, expected, normal | Invalid, unexpected, out-of-range, null
What we verify | Correct output / success | Proper error message / no crash
Example | Login with correct credentials → dashboard | Login with empty fields → validation error shown
🔍 Full Example — Registration Form (Name Field)
Positive (valid input):
→ Enter "Priya Sharma" → Accepted ✅
→ Enter "A" (single character) → Accepted ✅ (if min 1 char allowed)

Negative (invalid input):
→ Leave name field empty → "Name is required" error ✅
→ Enter "Priya123" (numbers in name) → Error shown ✅ (if only letters allowed)
→ Enter 300 characters → Error "Max 100 characters allowed" ✅
→ Enter special characters !@# → Error shown ✅
→ Enter SQL injection: '; DROP TABLE users; -- → Should NOT crash, show error ✅
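The checks above translate directly into a validator plus paired positive and negative assertions. A Python sketch — the validation rules (required, letters and spaces only, max 100 characters) are assumptions taken from the example, not a universal standard:

```python
import re
from typing import Optional

def validate_name(name: str, max_len: int = 100) -> Optional[str]:
    """Return an error message for invalid input, or None if the name is accepted.
    Rules are illustrative: required, letters and spaces only, <= max_len chars."""
    if not name:
        return "Name is required"
    if len(name) > max_len:
        return f"Max {max_len} characters allowed"
    if not re.fullmatch(r"[A-Za-z ]+", name):
        return "Only letters and spaces are allowed"
    return None

# Positive cases: valid input is accepted (no error message).
assert validate_name("Priya Sharma") is None
assert validate_name("A") is None

# Negative cases: invalid input yields a clear error — and never a crash.
assert validate_name("") == "Name is required"
assert validate_name("Priya123") is not None                  # digits rejected
assert validate_name("x" * 300) == "Max 100 characters allowed"
assert validate_name("!@#") is not None                       # special characters rejected
assert validate_name("'; DROP TABLE users; --") is not None   # injection string rejected
print("all checks passed")
```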
💡
For every field in a form, always write at least 3–4 negative test cases: empty, too long, wrong format, special characters. This is where most real-world bugs hide — developers test the happy path, but forget to handle errors properly.
🧪 Quiz: Testing what happens when you submit a registration form with the email field left blank is an example of?
11
Testing Techniques
Boundary Value Analysis (BVA) & Equivalence Partitioning (EP)
These are two structured techniques for selecting the most effective test cases for fields that accept a range of values. They help you test maximum coverage with minimum test cases.
📐 Boundary Value Analysis (BVA)
What it is: Testing at and around the boundary edges of valid input ranges. Most bugs occur at the boundary — not in the middle. The rule is to test: Minimum, Minimum−1, Maximum, Maximum+1, and a typical middle value.

Rule: If a field accepts values from 1 to 100, test these boundary values:
0 (just below minimum) → Should FAIL (invalid)
1 (minimum boundary) → Should PASS (valid)
50 (middle value) → Should PASS (valid)
100 (maximum boundary) → Should PASS (valid)
101 (just above maximum) → Should FAIL (invalid)

Real Example — Age field (must be 18 to 60):
Test with: 17 ❌ | 18 ✅ | 39 ✅ | 60 ✅ | 61 ❌
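The min−1 / min / middle / max / max+1 rule can be captured in a few lines (a sketch; the helper name is my own):

```python
def bva_values(minimum: int, maximum: int) -> list:
    """Boundary Value Analysis: test min-1, min, a middle value, max, and max+1."""
    middle = (minimum + maximum) // 2
    return [minimum - 1, minimum, middle, maximum, maximum + 1]

print(bva_values(18, 60))    # [17, 18, 39, 60, 61] — matches the age example
print(bva_values(1, 100))    # [0, 1, 50, 100, 101]
```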
🗂️ Equivalence Partitioning (EP)
What it is: Dividing all possible inputs into groups (partitions) where all values in a group are expected to behave the same way. Instead of testing every single value, test just ONE representative value from each partition.

Same Age field (18 to 60) — Partitions:
Partition 1 (Invalid — below range): 1 to 17 → test one value, e.g. 10
Partition 2 (Valid): 18 to 60 → test one value, e.g. 30
Partition 3 (Invalid — above range): 61 to 150 → test one value, e.g. 80

Instead of testing 150 values → test just 3 representative values. Same confidence, far less time.
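Picking one representative per partition can likewise be sketched in code. Midpoints are used here, but any value inside a partition works equally well; the overall bounds 1 and 150 are assumptions carried over from the age example:

```python
def ep_representatives(minimum: int, maximum: int,
                       lower: int = 1, upper: int = 150) -> dict:
    """Equivalence Partitioning: one representative value per partition.
    Partition bounds (lower/upper) are illustrative assumptions."""
    return {
        "invalid_below": (lower + minimum - 1) // 2,   # somewhere in 1..17
        "valid": (minimum + maximum) // 2,             # somewhere in 18..60
        "invalid_above": (maximum + 1 + upper) // 2,   # somewhere in 61..150
    }

print(ep_representatives(18, 60))
# {'invalid_below': 9, 'valid': 39, 'invalid_above': 105}
```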
💡
BVA + EP together: In practice, use EP to identify the groups, then use BVA to test the edges of each group. This combination gives maximum bug-finding power with minimum test cases.

Why boundaries matter: If age validation is coded as if age > 18 (instead of >=), then age = 18 will incorrectly fail. BVA catches this boundary condition bug perfectly.
🧪 Quiz: A password field must be 8 to 20 characters. Using BVA, which values should you test?
12
Pro Tips
Manual Testing Best Practices

These practices separate a professional QA engineer from a beginner.

  1. Always read the requirements before writing test cases
     Understand the feature fully — what is the expected behaviour? What are the edge cases? Discuss any ambiguities with the developer or BA before testing begins. Never test blindly.
  2. Test both positive AND negative scenarios for every feature
     Never just test the happy path. For every form field, test empty, too long, wrong format, special characters. Most production bugs come from untested negative scenarios.
  3. Write test cases before testing starts (Test-First approach)
     Writing test cases forces you to think through all scenarios before touching the software. Testers who write cases first find more bugs than those who test without documentation.
  4. One test case = one thing to verify
     A test case should test exactly one condition. Don't combine "test login AND test logout" in one case. Separating concerns makes debugging easier when something fails.
  5. Reproduce the bug 3 times before logging it
     Before logging a bug, reproduce it at least 3 times to confirm it's consistent. Some issues are one-time glitches, not real bugs. Logging unreproducible bugs wastes developer time.
  6. Test on multiple browsers and devices
     Never assume it works everywhere because it works in Chrome. Always test on at least Chrome, Firefox, and Safari. Test on both desktop and mobile views. Browser compatibility bugs are extremely common.
  7. Update test cases when requirements change
     If a feature is updated, update the corresponding test cases immediately. Outdated test cases are worse than no test cases — they create false confidence.
  8. Test UI against the approved design mockup (Figma/Zeplin)
     Compare every screen pixel by pixel against the approved design — spacing, font size, colours, alignment, hover states, error states. UI bugs are real bugs, not minor issues.
  9. Explore beyond the test cases — Exploratory Testing
     After executing written test cases, spend time freely exploring the feature. Try unusual combinations, fast clicks, slow internet, unexpected inputs. The most interesting bugs are found in exploration.
  10. Keep test case IDs and JIRA bug IDs linked
     Whenever a bug is found in a test case, note the JIRA bug ID in the test case's "Actual Result" field and in comments. This creates traceability — you can always trace a bug back to which test case found it.
🎯
Interview ready answer — "Describe your testing process":
"I start by reading the requirements and user stories. I write test scenarios and detailed test cases covering positive, negative, boundary, and UI tests. Once the build is received, I run a smoke test first to confirm it's stable. Then I execute all test cases, log bugs in JIRA with proper steps to reproduce, expected and actual result, and screenshots. After fixes, I run sanity and regression tests. I also perform exploratory testing beyond the written test cases to find edge-case bugs."
🧪 Final Quiz: You find a potential bug — the checkout total shows the wrong amount. What should you do BEFORE logging it as a bug?

Ready to Apply Manual Testing in Real Projects?

STAD Solution's QA training covers Manual Testing with hands-on practice, real test case writing, JIRA, Postman, and 100% placement support.

Explore Courses at STAD Solution →