Manual Testing: A to Z Guide
Master manual testing from scratch — concepts, types, test case writing, techniques and best practices. Simple language, real examples.
In manual testing, the tester acts like a real user. They click buttons, fill forms, navigate screens, and verify that everything works as it should — based on the requirements document or user story.
Why manual testing is still essential:
- Human judgement: A human can notice that a button "looks off", a screen "feels confusing", or an error message "doesn't make sense" — things no automated script can detect.
- Exploratory testing: Testers can think creatively, test unusual scenarios, and explore the application freely, uncovering edge-case bugs that were never anticipated.
- UI/UX validation: Checking whether the layout, design, fonts, colours, and spacing match the approved design mockup — only a human eye can do this properly.
- New features in early stages: When a feature is being built for the first time, manual testing is faster to start with before investing time in automation scripts.
- UAT (User Acceptance Testing): Real users or business stakeholders test the product before go-live — this is always manual, never automated.
The three testing approaches, by level of code knowledge:
| Type | Code Knowledge | Focus | Who Performs | Example |
|---|---|---|---|---|
| ⬛ Black Box | None — no code visibility | Input → Output behaviour | Manual QA testers, business users | Testing login page with valid/invalid credentials |
| ⬜ White Box | Full — reads source code | Internal logic, code paths | Developers, SDET engineers | Testing interest calculation formula logic in code |
| 🔲 Grey Box | Partial — knows some internals | Functional + internal validation | Senior QA / automation engineers | Checking API response AND verifying DB record |
Types of Functional Testing:
1. Unit Testing: Testing the smallest individual unit/module of code in isolation. Usually done by developers, not QA.
   Example: Testing only the "calculate discount" function — giving it different inputs and checking its output.
2. Integration Testing: Testing how multiple modules or components work together. Checks the communication and data flow between units.
   Example: After the login module is integrated with the profile module, verify that logging in correctly loads the user's profile data.
3. System Testing: Testing the entire, fully integrated system against the requirements. Done by the QA team in the testing environment. This is the most important QA phase.
   Example: Testing the entire e-commerce app end to end — register, login, browse products, add to cart, checkout, payment, order confirmation.
4. User Acceptance Testing (UAT): Final testing done by actual end users or business clients to confirm the software meets real-world requirements before go-live. Done in the staging/pre-production environment.
   Example: The bank's business team tests the new loan application feature to confirm it matches their defined business rules before release to customers.
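The Unit Testing example above can be sketched in code. The `calculate_discount` function and its rules here are hypothetical, invented for illustration, and the test follows the pattern described: feed the unit different inputs in isolation and check each output.

```python
# Hypothetical function under test: applies a percentage discount to a price.
def calculate_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Unit test: exercise the function in isolation with normal,
# edge, and invalid inputs, checking each output.
def test_calculate_discount():
    assert calculate_discount(100.0, 10) == 90.0    # normal case
    assert calculate_discount(100.0, 0) == 100.0    # no discount
    assert calculate_discount(100.0, 100) == 0.0    # full discount
    try:
        calculate_discount(100.0, 150)              # invalid input
        assert False, "expected ValueError"
    except ValueError:
        pass

test_calculate_discount()
```

In a real project this would live in a test file and be run by a test runner such as pytest; the structure is the same.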
Typical functional test checks for a login page:
✓ Enter valid email + wrong password → Error "Invalid credentials" shown
✓ Leave email field empty → Error "Email is required" shown
✓ Enter unregistered email → Error "Account not found" shown
✓ Click "Remember me" → Next visit auto-fills credentials
✓ "Forgot password" link navigates to reset password page
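The checklist above can be written as a data-driven test table. The `login` function here is a stub standing in for the real application (the registered account and messages are taken from the examples in this guide), so the pattern can run end to end:

```python
# Stub standing in for the real application's login logic.
REGISTERED = {"priya@example.com": "Test@123"}

def login(email, password):
    if not email:
        return "Email is required"
    if email not in REGISTERED:
        return "Account not found"
    if REGISTERED[email] != password:
        return "Invalid credentials"
    return "OK"

# Each row is one test case: (email, password, expected message).
CASES = [
    ("priya@example.com",  "wrong",    "Invalid credentials"),
    ("",                   "Test@123", "Email is required"),
    ("nobody@example.com", "Test@123", "Account not found"),
    ("priya@example.com",  "Test@123", "OK"),
]

for email, password, expected in CASES:
    assert login(email, password) == expected
```

Adding a new check becomes adding a row to the table, which is why data-driven tables scale well for form validation.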
Key types of Non-Functional Testing:
1. Performance Testing: Tests how the system behaves under a specific workload — response time, speed, stability, and scalability.
   Example: Does the product search page load in under 2 seconds when 1,000 users are browsing simultaneously?
2. Load Testing: Tests how the system behaves under a normal, expected load. Validates that the system can handle the anticipated number of users.
   Example: Testing that the app handles 500 concurrent users without slowing down — this is the expected peak load.
3. Stress Testing: Tests the system beyond its normal operating capacity to find its breaking point — what happens when the system is overloaded?
   Example: Gradually increasing users from 500 to 5,000 to find at what point the server crashes or starts giving errors.
4. Usability Testing: Tests how easy and intuitive the application is to use for real users. Measures user satisfaction, ease of navigation, and clarity.
   Example: Asking 5 non-technical users to complete a purchase on the app and observing where they get confused or stuck.
5. Security Testing: Tests that the application is protected from unauthorised access, data breaches, SQL injection, XSS, and other attacks.
   Example: Checking that entering admin' OR '1'='1 in the login field does NOT bypass authentication (SQL injection test).
6. Compatibility Testing: Tests that the software works correctly across different browsers, operating systems, screen sizes, and devices.
   Example: Testing the web app on Chrome, Firefox, Safari, and Edge — and also on Android and iOS mobile phones.
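As a rough illustration of what performance and load testing measure, here is a toy sketch: a stubbed request handler stands in for a real server, and a thread pool simulates concurrent users while each response time is recorded. Real load tests use dedicated tools such as JMeter rather than hand-rolled scripts.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Stub standing in for an HTTP request to the product search page.
def fake_search_request():
    start = time.perf_counter()
    time.sleep(0.01)               # simulate server processing time
    return time.perf_counter() - start

# Fire N "concurrent users" and collect each response time.
def run_load_test(users=50):
    with ThreadPoolExecutor(max_workers=users) as pool:
        return list(pool.map(lambda _: fake_search_request(), range(users)))

durations = run_load_test(50)
# Pass criterion from the example above: every response under 2 seconds.
assert max(durations) < 2.0
print(f"max response time: {max(durations):.3f}s over {len(durations)} requests")
```

Swapping the stub for a real HTTP call and ramping `users` upward turns the same loop into a crude stress test: keep increasing the count until responses slow down or fail.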
Functional = Does it WORK correctly? (Login button logs you in ✓)
Non-Functional = Does it work WELL? (Login page loads in <1 second ✓ | Works on all browsers ✓)
These three are the most frequently asked about in QA interviews. Each serves a distinct purpose and is done at a different stage.
Smoke Testing
When: Every time a new build is received from developers.
Scope: Broad — covers the entire application, but only at a surface level.
Duration: 15–30 minutes. Quick and shallow.
Done by: Developers or QA team.
Real example: New build arrives for an e-commerce app. Smoke tests: Can you open the app? Can you log in? Does the home page load? Can you search for a product? If any of these fail → build rejected.
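The rejection rule in that example can be sketched as a tiny gate: run the critical checks in order and reject the build on the first failure. The check functions here are hypothetical placeholders for real checks.

```python
# Hypothetical smoke checks — each returns True if the basic flow works.
def can_open_app():        return True
def can_log_in():          return True
def home_page_loads():     return True
def can_search_product():  return True

SMOKE_CHECKS = [can_open_app, can_log_in, home_page_loads, can_search_product]

def run_smoke_suite(checks):
    # Any single failure rejects the build — no deeper testing happens.
    for check in checks:
        if not check():
            return "build rejected"
    return "build accepted"

print(run_smoke_suite(SMOKE_CHECKS))  # build accepted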
Sanity Testing
When: After a specific bug fix is received for retesting.
Scope: Narrow — only the fixed area and related modules.
Duration: Quick check, not an exhaustive test.
Done by: QA team only (not documented as formally as other test types).
Real example: Developer fixed the "OTP not sending" bug. Sanity test: Verify OTP works now. Also check that nearby features (login, registration, and password reset) weren't broken.
Regression Testing
When: After every new build, every bug fix, or before a major release.
Scope: Full application — all existing features are retested.
Duration: Can take hours to days. Often automated for speed.
Done by: QA team only.
Real example: Developer adds a new "Wishlist" feature to the e-commerce app. Regression testing: Re-run all test cases for Login, Cart, Checkout, Payment, Profile, Search — to make sure adding "Wishlist" didn't break anything that was previously working.
| | Smoke | Sanity | Regression |
|---|---|---|---|
| When done | After new build | After specific fix | After any code change |
| Scope | Entire app, surface level | Specific module only | Entire app, in depth |
| Speed | Very fast (15–30 min) | Fast (targeted) | Slow (hours to days) |
| Goal | Build stable for testing? | Specific fix working? | Did new change break anything? |
| Done by | Dev or QA | QA only | QA only |
Think of the three as a funnel:
Smoke = wide opening (broad, quick scan of whole app)
Sanity = middle (narrowed to specific changes)
Regression = bottom (deep, full coverage)
Mandatory components of every test case:
- Test Case ID (e.g. TC_001)
- Title / short description
- Pre-conditions
- Test steps (one action per step)
- Test data (concrete values)
- Expected result
- Actual result
- Status (Pass / Fail / Blocked / Skipped)
Example of a complete, well-written test case:
Test Case ID: TC_001 (Verify login with valid credentials)
Pre-conditions:
1. User has a registered account (priya@example.com / Test@123)
2. User is on the Login page (URL: /login)
3. User is currently logged out
Test Steps:
1. Enter email: priya@example.com in the Email field
2. Enter password: Test@123 in the Password field
3. Click the "Login" button
Expected Result: User is redirected to /dashboard, and the user's name is shown in the header
Common test-case writing mistakes, and how to fix them:
❌ Vague steps: "Fill in the login form" → ✅ "Enter email in Email field"
❌ Multiple actions per step: "Enter email, password and click login" → ✅ Three separate steps
❌ Vague expected result: "Login works" → ✅ "User redirected to /dashboard, name shown in header"
❌ No test data: "Enter valid email" → ✅ "Enter: priya@example.com"
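One way to enforce the rules above is to store each test case as a structured record rather than free text. This sketch uses a Python dataclass; the field names mirror the components discussed here and are illustrative, not a standard schema.

```python
from dataclasses import dataclass

# A test case as a structured record — one field per mandatory component.
@dataclass
class TestCase:
    tc_id: str
    title: str
    preconditions: list
    steps: list            # one action per step
    test_data: dict        # concrete values, never "a valid email"
    expected_result: str
    actual_result: str = ""
    status: str = "Not Executed"

tc_001 = TestCase(
    tc_id="TC_001",
    title="Verify login with valid credentials",
    preconditions=["User is on the Login page (/login)", "User is logged out"],
    steps=[
        "Enter email in the Email field",
        "Enter password in the Password field",
        'Click the "Login" button',
    ],
    test_data={"email": "priya@example.com", "password": "Test@123"},
    expected_result="User redirected to /dashboard, name shown in header",
)

assert tc_001.status == "Not Executed"
```

A record like this also makes review easy: a missing field (no test data, no expected result) is immediately visible instead of buried in prose.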
Test Scenario 1: Verify login with valid credentials
→ TC_001: Valid email + valid password → login successful
Test Scenario 2: Verify login with invalid credentials
→ TC_002: Valid email + wrong password → error shown
→ TC_003: Unregistered email + any password → error shown
→ TC_004: Empty email field → "Email required" error
→ TC_005: Empty password field → "Password required" error
Test Scenario 3: Verify account lockout after multiple failed attempts
→ TC_006: 5 consecutive wrong passwords → account locked for 30 minutes
Test Scenario 4: Verify "Forgot Password" functionality
→ TC_007: Enter registered email → OTP/reset link received
→ TC_008: Enter unregistered email → appropriate error shown
The 4 test case execution statuses:
1. PASS: The actual result matches the expected result exactly. The feature is working as specified. No bug is logged.
   Example: Entering valid credentials → user is redirected to the dashboard. ✅ Matches expected result.
2. FAIL: The actual result does NOT match the expected result. A defect is found, and a bug is immediately logged in JIRA with all details.
   Example: Entering valid credentials → page shows a blank screen instead of the dashboard. ❌ Bug logged.
3. BLOCKED: The test case cannot be executed because of a dependency or another bug. A pre-condition has not been met. Not the tester's fault — blocked by an external issue.
   Example: Can't test "Add to Cart" because the Login feature itself has a critical bug and the tester cannot log in at all.
4. SKIPPED (Not Executed): The test case was intentionally not executed — due to time constraints, being out of scope for the current sprint, or the feature not yet being developed.
   Example: "Test promo code feature" was skipped because the promo code feature is not in the current sprint's scope.
Fail = You ran the test and the result was wrong (bug found).
Blocked = You COULD NOT even run the test because something else is preventing it.
Example execution summary for a 50-test-case cycle:
PASS 34 (68%) — Feature working correctly
FAIL 8 (16%) — Bugs logged in JIRA
BLOCKED 5 (10%) — Waiting for critical bugs to be fixed
SKIPPED 3 (6%) — Out of current sprint scope
→ Test Completion Rate: 94% | Pass Rate: 81%
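One plausible derivation of those two percentages (assuming completion counts every attempted case, blocked included, while pass rate counts only the cases actually executed) reproduces the numbers:

```python
# Execution counts from the summary above.
counts = {"pass": 34, "fail": 8, "blocked": 5, "skipped": 3}
total = sum(counts.values())                      # 50 test cases

# Completion: every case at least attempted (pass + fail + blocked).
attempted = counts["pass"] + counts["fail"] + counts["blocked"]
completion_rate = round(attempted / total * 100)

# Pass rate: passed cases out of those actually executed (pass + fail).
executed = counts["pass"] + counts["fail"]
pass_rate = round(counts["pass"] / executed * 100)

print(completion_rate, pass_rate)  # 94 81
```

Teams define these metrics differently (some exclude blocked cases from completion), so always check which formula your project report uses.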
| | Positive Testing | Negative Testing |
|---|---|---|
| Goal | System works with valid input | System handles invalid input gracefully |
| Input type | Valid, expected, normal | Invalid, unexpected, out-of-range, null |
| What we verify | Correct output / success | Proper error message / no crash |
| Example | Login with correct credentials → dashboard | Login with empty fields → validation error shown |
Example: a Name field (letters only, max 100 characters).
Positive (valid input):
→ Enter "Priya Sharma" → Accepted ✅
→ Enter "A" (single character) → Accepted ✅ (if min 1 char allowed)
Negative (invalid input):
→ Leave name field empty → "Name is required" error ✅
→ Enter "Priya123" (numbers in name) → Error shown ✅ (if only letters allowed)
→ Enter 300 characters → Error "Max 100 characters allowed" ✅
→ Enter special characters !@# → Error shown ✅
→ Enter SQL injection: '; DROP TABLE users; -- → Should NOT crash, show error ✅
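That last SQL-injection check comes down to how the application builds its query. Here is a sketch of the safe pattern using Python's built-in sqlite3 with a parameterized query; the table and credentials are made up for the demo.

```python
import sqlite3

# In-memory database with one registered user.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('priya@example.com', 'Test@123')")

def login_safe(email, password):
    # Parameterized query: user input is bound as data, never spliced
    # into the SQL string, so strings like "admin' OR '1'='1" cannot
    # change the query's logic.
    cur = conn.execute(
        "SELECT 1 FROM users WHERE email = ? AND password = ?", (email, password)
    )
    return cur.fetchone() is not None

assert login_safe("priya@example.com", "Test@123") is True   # genuine user
assert login_safe("admin' OR '1'='1", "anything") is False   # injection blocked
```

If the query were built by string concatenation instead, the injected quote would close the string and the `OR '1'='1'` clause would match every row, which is exactly what the negative test is designed to catch.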
Boundary Value Analysis (BVA): testing at the edges of valid ranges.
Rule: If a field accepts values from 1 to 100, test these boundary values:
• 0 (just below minimum) → Should FAIL (invalid)
• 1 (minimum boundary) → Should PASS (valid)
• 50 (middle value) → Should PASS (valid)
• 100 (maximum boundary) → Should PASS (valid)
• 101 (just above maximum) → Should FAIL (invalid)
Real Example — Age field (must be 18 to 60):
Test with: 17 ❌ | 18 ✅ | 39 ✅ | 60 ✅ | 61 ❌
Equivalence Partitioning (EP): the same Age field (18 to 60), divided into partitions:
Partition 1 (Invalid — below range): 1 to 17 → test one value, e.g. 10
Partition 2 (Valid): 18 to 60 → test one value, e.g. 30
Partition 3 (Invalid — above range): 61 to 150 → test one value, e.g. 80
Instead of testing 150 values → test just 3 representative values. Same confidence, far less time.
Why boundaries matter: If age validation is coded as if age > 18 (instead of >=), then age = 18 will incorrectly fail. BVA catches this boundary condition bug perfectly.
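That boundary bug can be demonstrated directly. Below, a deliberately buggy validator uses > where the requirement needs >=, and the BVA values from the example expose it at exactly age 18:

```python
# Buggy validator: uses > instead of >=, the exact defect described above.
def is_valid_age_buggy(age):
    return age > 18 and age <= 60

# Correct validator per the requirement: 18 to 60 inclusive.
def is_valid_age(age):
    return 18 <= age <= 60

# BVA values for the 18-60 range: 17, 18, 39, 60, 61.
boundary_cases = {17: False, 18: True, 39: True, 60: True, 61: False}

for age, expected in boundary_cases.items():
    assert is_valid_age(age) == expected      # correct version passes all

# The buggy version fails only at the lower boundary — BVA catches it.
assert is_valid_age_buggy(18) is False        # should be True; bug exposed
assert is_valid_age_buggy(19) is True         # mid-range values hide the bug
```

Note that a mid-range value like 39 passes on both versions; only the boundary value 18 distinguishes correct from buggy code, which is the whole point of BVA.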
These practices separate a professional QA engineer from a beginner.
1. Always read the requirements before writing test cases. Understand the feature fully — what is the expected behaviour? What are the edge cases? Discuss any ambiguities with the developer or BA before testing begins. Never test blindly.
2. Test both positive AND negative scenarios for every feature. Never just test the happy path. For every form field, test empty, too long, wrong format, and special characters. Most production bugs come from untested negative scenarios.
3. Write test cases before testing starts (Test-First approach). Writing test cases forces you to think through all scenarios before touching the software. Testers who write cases first find more bugs than those who test without documentation.
4. One test case = one thing to verify. A test case should test exactly one condition. Don't combine "test login AND test logout" in one case. Separating concerns makes debugging easier when something fails.
5. Reproduce a bug 3 times before logging it. Before logging a bug, reproduce it at least 3 times to confirm it's consistent. Some issues are one-time glitches, not real bugs. Logging unreproducible bugs wastes developer time.
6. Test on multiple browsers and devices. Never assume it works everywhere because it works in Chrome. Always test on at least Chrome, Firefox, and Safari. Test on both desktop and mobile views. Browser compatibility bugs are extremely common.
7. Update test cases when requirements change. If a feature is updated, update the corresponding test cases immediately. Outdated test cases are worse than no test cases — they create false confidence.
8. Test the UI against the approved design mockup (Figma/Zeplin). Compare every screen pixel by pixel against the approved design — spacing, font size, colours, alignment, hover states, error states. UI bugs are real bugs, not minor issues.
9. Explore beyond the test cases (Exploratory Testing). After executing written test cases, spend time freely exploring the feature. Try unusual combinations, fast clicks, slow internet, unexpected inputs. The most interesting bugs are found in exploration.
10. Keep test case IDs and JIRA bug IDs linked. Whenever a bug is found by a test case, note the JIRA bug ID in the test case's "Actual Result" field and in comments. This creates traceability — you can always trace a bug back to the test case that found it.
A model answer to the common interview question "Describe your manual testing process":
"I start by reading the requirements and user stories. I write test scenarios and detailed test cases covering positive, negative, boundary, and UI tests. Once the build is received, I run a smoke test first to confirm it's stable. Then I execute all test cases, log bugs in JIRA with proper steps to reproduce, expected and actual results, and screenshots. After fixes, I run sanity and regression tests. I also perform exploratory testing beyond the written test cases to find edge-case bugs."
Ready to Apply Manual Testing in Real Projects?
STAD Solution's QA training covers Manual Testing with hands-on practice, real test case writing, JIRA, Postman, and 100% placement support.
Explore Courses at STAD Solution →