Manual Testing Tutorial

Complete guide with clear explanations, real-life examples, test design techniques, bug life cycle, interview questions, and career roadmap for beginners and professionals.

Contents

1) Introduction

What is Manual Testing? It is the process of verifying and validating software by executing tests manually, without automation tools. The tester behaves like a real user—navigates screens, enters data, and confirms whether the application meets requirements and provides a smooth experience.

Goal: find defects early, reduce risk, and ensure the product is usable and reliable. Manual testing complements automation; it does not compete with it.

Example: Before releasing a banking app, a tester completes a ₹1 transfer end-to-end (add payee → send OTP → transfer → verify SMS/email receipt). This checks both the flow and the user experience.

2) Why Manual Testing Still Matters

Explain: Automation excels at repeating the same checks, but real products often fail in places scripts don’t anticipate—layout glitches, confusing messages, or odd user behavior. That’s where manual testing shines.

  • User Experience: Humans judge clarity of labels, visual hierarchy, error tone, and trust—all critical for conversions.
  • Exploratory Work: Creative, unscripted testing finds “unknown unknowns.”
  • Cost & Speed (early stage): For MVPs and frequent UI changes, writing scripts is slower than hands-on checks.
Example: An e-commerce app passed all Selenium tests but conversions dropped on iPhone X. Manual checking revealed the “Buy” button overlapped with a fixed banner. One CSS fix restored conversions.

3) SDLC & STLC

SDLC (Software Development Life Cycle)

Typical flow: Requirements → Design → Development → Testing → Deployment → Maintenance. Testers participate from day one (clarifying requirements, defining acceptance criteria).

STLC (Software Testing Life Cycle)

STLC is the tester’s workflow:

  1. Requirement Analysis: clarify scope, risks, testability.
  2. Test Planning: strategy, resources, timeline, environments.
  3. Test Case Development: scenarios, test cases, test data.
  4. Environment Setup: build, test accounts, tools.
  5. Execution: run tests, log defects, retest.
  6. Closure: metrics, lessons learned, sign-off.
Agile Example: In a 2-week sprint, as soon as “Login” is merged to QA, testers execute smoke tests and then full cases; defects feed back to the same sprint.

4) Principles of Testing

Explain: These principles keep testing realistic and effective.

  • Testing shows presence of defects, not absence.
  • Exhaustive testing is impossible → test smart with risk-based coverage.
  • Defects cluster → focus on complex/recently changed modules.
  • Pesticide paradox → refresh your test ideas regularly.
  • Early testing saves cost; fix during design rather than in production.
  • Absence-of-error fallacy → bug-free but wrong product is still a failure.
Example: Instead of 300 login tests, pick high-value ones (valid, invalid, lockout, reset, session timeout) and pair with exploratory sessions.

5) Types & Levels of Testing

Types (what we verify)

  • Functional: features, rules, permissions (login, order, refund).
  • Non-Functional: performance, security, usability, compatibility, reliability.
  • Maintenance: retesting & regression after fixes or enhancements.

Levels (where we verify)

  1. Unit: dev checks small functions (usually automated).
  2. Integration: modules talk correctly (cart ↔ payment).
  3. System: end-to-end in a prod-like env.
  4. UAT: business users validate requirements before go-live.
Example: Food app → Unit (item price calc), Integration (cart to payment gateway), System (browse→order→track), UAT (restaurant partner verifies settlement).

6) Functional vs Non-Functional

Functional Testing

What: Does the system do the right things? Verify inputs/outputs, rules, and flows (happy + edge + negative paths).

Non-Functional Testing

How: Does the system behave well under load? Is it secure, usable, and compatible?

  • Performance: load, stress, endurance (time to first byte, 95th percentile latency).
  • Security: auth/session, input sanitization, access control basics.
  • Usability & Accessibility: ease of use, keyboard/screen reader support.
  • Compatibility: browsers/OS/devices, viewport responsiveness.
Example: Functional = “Reset password email sent.” Non-functional = “Email delivered within 60s under 100 concurrent users.”
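The 95th-percentile latency mentioned above reduces to simple arithmetic over a sorted sample. A minimal sketch using the nearest-rank method; the latency numbers are illustrative, not from a real test run:

```python
import math

def percentile(values, pct):
    """Nearest-rank percentile: the smallest value such that at least
    pct% of the samples are at or below it."""
    ordered = sorted(values)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[rank - 1]

# Hypothetical response times (ms) collected for one endpoint.
latencies_ms = [120, 135, 150, 180, 210, 250, 320, 400, 650, 900]
p95 = percentile(latencies_ms, 95)  # 900 ms with this sample
```

Reporting p95 rather than the average keeps one slow outlier from hiding, which is why it appears in performance criteria.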

7) Test Design Techniques

Why: You can’t test every input. These techniques give max coverage with fewer tests.

  • Equivalence Partitioning (EP): Group inputs into valid/invalid classes; test one from each.
  • Boundary Value Analysis (BVA): Defects often live at edges → test min, max, just inside/outside.
  • Decision Table: Complex rules with combinations; ensures every rule path is covered.
  • State Transition: Output depends on current state (e.g., account locked vs active).
  • Use Case Testing: Real user journeys (happy path + failure path).
Example (BVA): Age allowed 18–60 → test 17, 18, 60, 61.
Example (Decision Table): Coupon applies only if new_user = yes AND cart ≥ ₹1000 → verify all 4 combinations.
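Both examples above can be sketched in a few lines; `coupon_applies` and its parameters are hypothetical stand-ins for the rule in the decision-table example:

```python
from itertools import product

def bva_values(min_val, max_val):
    """BVA: test just outside and exactly on each boundary."""
    return [min_val - 1, min_val, max_val, max_val + 1]

assert bva_values(18, 60) == [17, 18, 60, 61]  # the age example above

def coupon_applies(new_user, cart_total):
    """Rule from the example: coupon only for new users with cart >= 1000."""
    return new_user and cart_total >= 1000

# Decision table: enumerate all 4 combinations of the two conditions.
for new_user, cart_total in product([True, False], [1500, 500]):
    expected = new_user and cart_total >= 1000
    assert coupon_applies(new_user, cart_total) == expected
```

Generating combinations with `product` guarantees no rule path is skipped, which is the whole point of a decision table.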

8) Writing Test Cases & Test Data

How to write solid test cases

Include: ID, Title, Preconditions, Steps (numbered), Test Data, Expected Result. Keep them atomic, clear, and traceable to requirements (RTM).

Template Mini-Example (Login):
ID: TC_LOGIN_01 • Pre: User registered • Steps: Enter email & password → Click Login • Expected: Dashboard loads within 2s; user name shown.

Test Data Management

  • Prepare valid, invalid, and edge data; anonymize PII.
  • Version your datasets so defects are reproducible.
  • Cover formats (dates, numbers, emails), lengths, locales.
Example: DOB field → Valid: 29-Feb-2020; Invalid: 31-Nov-2021; Empty: show friendly error.
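The DOB example can be checked with the standard library alone; the DD-Mon-YYYY format string is an assumption about how the field accepts input:

```python
from datetime import datetime

def is_valid_dob(text):
    """True only if text parses as a real calendar date in DD-Mon-YYYY form."""
    try:
        datetime.strptime(text, "%d-%b-%Y")
        return True
    except ValueError:
        return False

assert is_valid_dob("29-Feb-2020")      # 2020 is a leap year
assert not is_valid_dob("31-Nov-2021")  # November has only 30 days
assert not is_valid_dob("")             # empty input must be rejected
```

Encoding test data as assertions like this also makes the dataset versionable, per the bullet above.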

9) Bug Life Cycle & Defect Reporting

Life Cycle

  1. New: tester logs with details.
  2. Assigned: triage sets owner/priority.
  3. In Progress: dev working; status updates.
  4. Fixed: code change complete.
  5. Retested: QA verifies on same build.
  6. Closed (or Reopened if issue persists).
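The six stages above can be modeled as an allowed-transition map. This is a deliberately simplified sketch; real trackers such as Jira use configurable workflows with more states:

```python
# Allowed moves between defect states, per the life cycle above.
TRANSITIONS = {
    "New": {"Assigned"},
    "Assigned": {"In Progress"},
    "In Progress": {"Fixed"},
    "Fixed": {"Retested"},
    "Retested": {"Closed", "Reopened"},
    "Reopened": {"Assigned"},
    "Closed": set(),          # terminal state
}

def can_move(current, target):
    """True if the workflow permits moving a defect from current to target."""
    return target in TRANSITIONS.get(current, set())

assert can_move("Retested", "Reopened")
assert not can_move("New", "Closed")  # must pass through triage and a fix first
```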

Write defects that get fixed

  • Title: where + what (e.g., “Checkout: COD ‘Pay’ button disabled on iOS”).
  • Environment/build, role/user, exact URL/module.
  • Numbered steps, expected vs actual, reproducibility rate.
  • Evidence: screenshot/video, console/network logs.
  • Severity (impact) & Priority (urgency) with reason.
Example: “Login → OTP paste (iOS Safari) keeps ‘Verify’ disabled. Expected: auto-enable on 6 digits.”

10) Test Execution

What & Why

Execution is where planned tests meet the real build. Choose the right strategy to save time and catch high-risk issues early.

Smoke Testing (Build Verification)

Broad, shallow checks on every new build to confirm it’s stable for deeper testing.

  • Launch, login, essential navigation, one critical transaction.
  • If smoke fails → stop, return build to developers.
Example: After QA deployment, tester verifies “Login → Add to Cart → Checkout.” If payment page 500-errors, testing halts.

Sanity Testing (Focused Quick Check)

After a fix or minor change, verify only the impacted area thoroughly and quickly.

Example: “Coupon stacking” bug fixed → try multiple coupons; ensure exactly one applies and totals are correct.

Regression Testing (Risk-based)

Re-run previous tests to ensure new changes didn’t break existing features. Prioritize high-risk flows; automate repetitive parts when possible.

Example: After adding “Wishlist,” checkout failed for COD. Regression suite caught it before release.

Exploratory & Ad-hoc Testing

Exploratory testing uses time-boxed sessions in which testers learn the product and probe it creatively. Ad-hoc testing consists of quick, unscripted checks by experienced testers.

Example: Rapidly toggling filters + pagination produced duplicate items in listing. Documented with screen recording and steps → fixed.

11) Usability, Accessibility & Acceptance (UAT)

Usability Testing

Explain: Measures how easy the product is to learn and use. Poor usability kills conversion even if features “work”.

  • Clear labels & hierarchy, helpful empty-states, meaningful errors.
  • Mobile ergonomics: target size ≥ 44px, sufficient spacing.
  • Consistent navigation and feedback (spinners, success toasts).
Example: Users abandon cart because “Apply Coupon” hides keyboard submit on small phones. Simple spacing fix increases conversions.

Accessibility (a11y)

Explain: Ensures people with disabilities can use the product (legal/compliance plus a wider audience).

  • Labels on inputs, proper headings, focus order, skip-to-content.
  • Keyboard operability; visible focus style.
  • Color contrast ≥ WCAG AA; alt text for images; ARIA where needed.
Example: Screen reader says “button” instead of “Pay Now” because the label is missing. Adding aria-label="Pay Now" fixes it.

Acceptance Testing (UAT)

Explain: Business stakeholders validate real scenarios on a prod-like environment before sign-off.

  • Use near-real data; run through key workflows; capture exceptions.
  • Document approvals and known limitations.
Example: Banking UAT: “Add payee → transfer ₹1 → receive SMS/email → ledger updated.” Any mismatch blocks release.

12) Metrics, Tools & Common Challenges

Useful Metrics (without vanity)

  • Execution progress: % passed / failed / blocked.
  • Defect metrics: density, leakage (found in prod / total), reopen rate.
  • Cycle time: avg time from defect open → close.
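The ratio metrics above reduce to simple arithmetic. A minimal sketch with illustrative numbers (function names are mine, not a standard API):

```python
def defect_leakage(found_in_prod, total_defects):
    """Share of defects that escaped to production."""
    return found_in_prod / total_defects

def execution_progress(passed, failed, blocked, total):
    """Percent of planned cases in each state."""
    return {
        "passed": 100 * passed / total,
        "failed": 100 * failed / total,
        "blocked": 100 * blocked / total,
    }

assert defect_leakage(5, 50) == 0.1  # 10% of defects leaked to production
```

Tracking these over releases (rather than quoting one snapshot) is what keeps them from becoming vanity numbers.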

Tooling for Manual Testers

  • Test management: TestRail, Zephyr, Qase.
  • Bug tracking: Jira, Bugzilla, Mantis.
  • Collaboration: Confluence/Notion, Loom (video repro), Slack.
  • Device/Browser: BrowserStack, real devices lab.

Common Challenges & Fixes

  • Time pressure: use risk-based prioritization; smoke first.
  • Flaky builds: agree on entry/exit criteria; stabilize env.
  • Bug ping-pong: write reproducible steps + evidence.

13) Interview Questions (with sample answers)

Fundamentals

Q1. Verification vs Validation?
Answer: Verification = building the product right (reviews, walk-throughs, static). Validation = building the right product (running tests on the app, dynamic). Example: reviewing BRD is verification; executing login test cases is validation.

Q2. STLC – correct order?
Answer: Requirement Analysis → Test Planning → Test Case Development → Environment Setup → Execution → Closure.

Q3. Bug life cycle?
Answer: New → Assigned → In Progress → Fixed → Retested → Closed / Reopened.

Q4. Smoke vs Sanity?
Answer: Smoke = quick build health. Sanity = focused check after a fix (e.g., verify coupon fix only).

Q5. Severity vs Priority?
Answer: Severity = technical impact; Priority = urgency. Typo on home (low sev, high pri). Payment failure (high sev, high pri).

Design & Execution

Q6. Boundary Value Analysis example?
Answer: Age 18–60 → test 17, 18, 60, 61; edges catch off-by-one bugs.

Q7. Equivalence Partitioning?
Answer: Split inputs into valid/invalid classes; test one from each to reduce cases.

Q8. What is RTM?
Answer: Requirement Traceability Matrix maps requirement → test cases → defects to ensure coverage.

Q9. Positive vs Negative testing?
Answer: Positive = valid inputs (expect success). Negative = invalid inputs (expect safe error). Both are needed.

Q10. Exploratory testing—when?
Answer: Early in feature delivery, after major UI changes, or when defects cluster; it finds unknown issues quickly.

Team & Process

Q11. Prioritize tests under time pressure?
Answer: Risk-based → critical business flows, recent changes, high-defect areas first; defer low-risk edge cases.

Q12. Handle flaky issues?
Answer: Capture environment, logs/HAR, video repro; mark “intermittent,” add system info; pair with dev to isolate.

Q13. What to check in UAT?
Answer: Real business scenarios, roles/permissions, reports, integrations, and acceptance criteria with sign-off.

Q14. Compatibility plan?
Answer: Define supported browsers/OS/devices, build a matrix; test top traffic first, then edge combos.

Q15. Real bug you found?
Answer: Explain context → steps → expected vs actual → impact → your fix/communication. Keep it concise.

14) Real-Life Case Studies (detailed)

Case 1 — OTP Not Delivered (Root cause: provider issue)

Context: Banking app login via OTP. UAT users on certain carriers reported no SMS.

Investigation: Tester tried multiple numbers; Airtel worked, Jio didn’t. Network logs showed request sent; SMS gateway logs missing callbacks.

Example Steps: Enter Jio number → Send OTP → wait 60s. Expected: OTP SMS received. Actual: none.

Impact: ~20% of customers locked out. Fix: multi-gateway fallback + alert on delivery failures.

Case 2 — Double Discount via Coupon Stacking

Context: Festive sale allowed “NEW50”. Some users applied another code too.

Investigation: Decision rule missing on server; UI blocked but API allowed second coupon.

Example: Cart ₹1000 → NEW50 → ₹500; add FESTIVE20 → became ₹400. Should be ₹500 max.

Fix: Backend rule: one active coupon per order; API validation added; regression case created.
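The backend rule from the fix ("one active coupon per order") might look like this in outline; the coupon table and order structure are illustrative, not the real API:

```python
# Hypothetical coupon table: code -> discount fraction.
COUPONS = {"NEW50": 0.50, "FESTIVE20": 0.20}

def apply_coupon(order, code):
    """Server-side guard: reject a second coupon instead of stacking."""
    if order.get("coupon") is not None:
        raise ValueError("Only one coupon per order")
    order["coupon"] = code
    order["total"] = round(order["total"] * (1 - COUPONS[code]))
    return order

order = {"total": 1000, "coupon": None}
apply_coupon(order, "NEW50")          # total becomes 500
try:
    apply_coupon(order, "FESTIVE20")  # second coupon must be rejected
except ValueError:
    pass                              # total stays at 500
```

The key point from the investigation is that this check lives on the server: a UI-only block leaves the API exploitable.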

Case 3 — Safari Date Picker Invisible

Context: Travel site fine on Chrome; iOS Safari couldn’t open calendar.

Root cause: CSS property unsupported in Safari; z-index caused overlay.

Steps: Open booking on iPhone → tap “Select date”. Expected: calendar visible. Actual: blank.

Fix: CSS fallback + cross-browser testing matrix added.

Case 4 — Transaction Debited but Not Credited

Context: High traffic caused DB timeouts. Debit succeeded; credit failed before commit.

Action: Added atomic transaction with rollback, idempotent retry; real-time reconciliation job.
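The fix described above (atomic debit/credit with rollback, plus idempotent retry) can be sketched with an in-memory model; a real system would use database transactions, not Python dicts:

```python
processed = set()              # transaction ids already applied (idempotency)
accounts = {"A": 1000, "B": 0}

def transfer(txn_id, src, dst, amount):
    """Debit and credit either both apply or neither does; retries are no-ops."""
    if txn_id in processed:    # idempotent retry: never double-debit
        return
    snapshot = dict(accounts)  # saved state for rollback
    try:
        accounts[src] -= amount
        if accounts[src] < 0:
            raise ValueError("insufficient funds")
        accounts[dst] += amount
        processed.add(txn_id)  # commit: mark as applied
    except Exception:
        accounts.update(snapshot)  # roll back both legs together
        raise

transfer("t1", "A", "B", 300)
transfer("t1", "A", "B", 300)  # retry with same id changes nothing
```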

Case 5 — Search Crash for Far Future Dates

Context: Flights search crashed for dates > 6 months ahead.

Cause: Missing validation; downstream API returned 500.

Fix: Client- and server-side guards; friendly message “No flights available beyond 180 days”.
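The guard from the fix can be sketched as follows; the 180-day limit comes from the error message in the text, and the function name is mine:

```python
from datetime import date, timedelta

MAX_AHEAD_DAYS = 180

def validate_travel_date(travel_date, today=None):
    """Return a friendly error instead of calling the downstream API."""
    today = today or date.today()
    if travel_date > today + timedelta(days=MAX_AHEAD_DAYS):
        return "No flights available beyond 180 days"
    return None  # date is within range; proceed with the search

# A date well past the window is rejected before any API call.
msg = validate_travel_date(date(2024, 12, 1), today=date(2024, 1, 1))
```

The same check belongs on both client and server, mirroring the fix: the client gives fast feedback, the server protects the downstream API.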

15) Career Path & Next Steps

Growth Path

Manual Tester → QA Analyst → Automation Engineer → Test Lead → QA Manager / SDET / QA Architect.

Skills to add (next 3–6 months)

  • Strong test design (EP/BVA/decision tables), SQL basics, API testing with Postman.
  • Automation basics (Selenium + Java/Python), Git, CI concepts.
  • Communication: crisp defect reports, demo skills, stakeholder updates.

Portfolio & Practice

  • Publish 3–4 mini projects: test cases, bug reports, and a short Loom video each.
  • Practice sites: demo.opencart.com, OrangeHRM, and the “the-internet” app on Heroku.