ISTQB Practice Test PDF (Free Printable 2026)
Pass your ISTQB exam on the first attempt. Practice questions with detailed answer explanations, hints, and instant scoring.
ISTQB Foundation Level Certification Overview
The ISTQB Certified Tester Foundation Level (CTFL) is the entry point for software testing professionals worldwide. Governed by the International Software Testing Qualifications Board, the Foundation Level exam tests your knowledge of fundamental testing concepts, techniques, and processes that apply across every software development context — from waterfall to agile, from safety-critical systems to consumer apps.
The exam contains 40 multiple-choice questions and must be completed in 60 minutes. A passing score is 65%, meaning you need at least 26 correct answers. Questions span three cognitive levels: K1 (remember), K2 (understand), and K3 (apply). Most questions are K2, requiring you to explain or distinguish concepts rather than simply recall definitions. The ISTQB practice test questions in this PDF follow the same format and distribution.
Fundamentals of Testing
The first domain of the CTFL syllabus asks why testing exists and what principles guide professional testers.
Why testing is necessary. Software failures have caused financial losses, damaged reputations, and in safety-critical industries — healthcare, aviation, automotive — put lives at risk. An error is a human mistake; a defect is the flaw in the code (or another work product) that the error introduces; a failure is what the user experiences when a defect is executed. Testing cannot eliminate all defects, but it reduces risk to an acceptable level and gives stakeholders confidence that the software meets its requirements.
Seven testing principles. The ISTQB syllabus defines seven foundational principles every CTFL candidate must know:
- Testing shows the presence of defects, not their absence — passing all tests does not prove software is bug-free.
- Exhaustive testing is impossible — you cannot run every combination of inputs and conditions on any non-trivial system.
- Early testing saves time and money — defects found in requirements cost far less to fix than defects found in production.
- Defect clustering — a small number of modules typically contain the majority of defects (the Pareto principle applied to bugs).
- The pesticide paradox — repeating the same tests over time finds fewer new defects; test suites must be reviewed and updated regularly.
- Testing is context-dependent — the techniques and rigor appropriate for a banking application differ from those for a marketing website.
- Absence of errors fallacy — finding and fixing defects is worthless if the system does not meet the user's actual needs.
Test process. The CTFL defines a generic test process: planning (objectives, scope, resources, schedule, risk), monitoring and control (tracking progress against the plan), analysis (what to test — identifying test conditions from the test basis), design (how to test — creating test cases and identifying test data), implementation (assembling test procedures and environments), execution (running tests, logging results, reporting defects), and completion (archiving assets, handing off to operations, retrospectives).
Testing Throughout the Software Development Lifecycle
Testing does not live in a silo — it is woven into whichever SDLC model a team uses.
Waterfall vs. agile testing. In a traditional waterfall project, testing follows development as a distinct phase. In agile environments, testing is continuous: testers collaborate with developers and product owners within each sprint, writing and running tests alongside new code. The CTFL syllabus requires you to understand both models and recognise when each is appropriate.
V-model. The V-model pairs each development phase with a corresponding test phase. Requirements analysis pairs with acceptance testing, system design pairs with system testing, component design pairs with integration testing, and coding pairs with component (unit) testing. The left side of the V represents development; the right side represents verification. Test planning begins early, on the left side, even though execution happens on the right.
Test levels. The four standard test levels map to the V-model: component testing (also called unit testing) targets individual modules in isolation, often automated by developers; integration testing checks the interfaces between components or systems; system testing validates the complete integrated product against its requirements; and acceptance testing (including user acceptance testing, UAT) confirms the product is ready for deployment from the business or user perspective.
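To make the component level concrete, here is a minimal sketch of a unit test in Python's built-in unittest framework. The apply_discount function is hypothetical, invented for illustration — the point is that the module is exercised in isolation, with no other components involved:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    """Component (unit) tests: one function, tested in isolation."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 120)
```

Saved as a module, this runs with `python -m unittest`. Integration, system, and acceptance testing then build outward from units like this one, checking interfaces, the whole product, and business readiness respectively.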
Test types. Orthogonal to test levels, test types describe what you are testing rather than at what granularity. Functional testing validates what the system does. Non-functional testing covers quality characteristics like performance, security, usability, and reliability. Structural (white-box) testing evaluates code coverage — whether particular paths, branches, or statements have been exercised. Confirmation testing re-executes tests that previously found defects to verify the fix. Regression testing re-runs a broader suite to confirm that changes have not broken existing functionality.
Static Testing
Static testing examines work products without executing code. It can find defects earlier and more cheaply than dynamic testing.
Reviews. ISTQB defines four review types on a spectrum from informal to highly structured. An informal review involves an author asking a colleague for feedback — no documented process required. A walkthrough is author-led: the author guides reviewers through the document, explaining it and collecting questions. A technical review brings peers together to discuss technical issues against defined objectives. An inspection is the most formal type: roles are assigned (moderator, author, reviewers, scribe), entry and exit criteria are defined, defects are logged systematically, and metrics are collected.
Static analysis. Tools rather than people examine the code or documentation. Linters flag syntax errors, naming violations, and style issues. Complexity metrics such as cyclomatic complexity identify modules that are risky to test and maintain. Static analysis can also detect security vulnerabilities, unreachable code, and data-flow anomalies before a single test is run.
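As a rough illustration of what a complexity metric measures, the sketch below estimates cyclomatic complexity by counting decision points in a Python syntax tree. This is a simplified approximation for teaching purposes, not a substitute for a real static analysis tool; note that no test is ever executed — the code is only parsed:

```python
import ast

# Decision-point node types: each adds one path through the code.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.IfExp,
                  ast.And, ast.Or, ast.ExceptHandler)

def cyclomatic_complexity(source: str) -> int:
    """Rough estimate: 1 + number of decision points in the source."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, DECISION_NODES)
                    for node in ast.walk(tree))
    return 1 + decisions

SNIPPET = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
"""

print(cyclomatic_complexity(SNIPPET))  # → 3 (two decisions + 1)
```

A module scoring far above its neighbours on a metric like this is a candidate for closer review or refactoring before dynamic testing even begins.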
Test Techniques
CTFL candidates must understand when and how to apply black-box, white-box, and experience-based techniques.
Black-box techniques. These derive test cases from the specification without reference to internal code structure. Equivalence partitioning divides input data into classes where the system is expected to behave the same way; you test one representative value per class. Boundary value analysis targets the edges of those classes, where defects cluster most densely. Decision table testing maps every combination of conditions to expected actions, ensuring no business rule combination is overlooked. State transition testing models a system as a set of states and the events that trigger transitions between them. Use case testing derives test scenarios from the interactions between actors and the system described in use cases.
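Equivalence partitioning and boundary value analysis are easiest to see with a worked example. Assume a hypothetical requirement that an age field accepts values from 18 to 65 inclusive — the requirement and the is_valid_age function are both invented for this sketch:

```python
# Hypothetical requirement: an age field accepts 18-65 inclusive.
def is_valid_age(age: int) -> bool:
    return 18 <= age <= 65

# Equivalence partitioning: one representative value per partition,
# since the system is expected to treat every value in a class alike.
partitions = {
    "invalid_below": 10,   # any value below 18
    "valid":         40,   # any value in 18..65
    "invalid_above": 70,   # any value above 65
}
assert not is_valid_age(partitions["invalid_below"])
assert is_valid_age(partitions["valid"])
assert not is_valid_age(partitions["invalid_above"])

# Boundary value analysis (two-value): test each edge and its
# nearest neighbour in the adjacent partition.
boundaries = {17: False, 18: True, 65: True, 66: False}
for value, expected in boundaries.items():
    assert is_valid_age(value) == expected
```

Three partition tests plus four boundary tests give strong coverage of this rule with only seven values — exactly the economy these techniques are designed to provide.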
White-box techniques. These measure how thoroughly tests exercise the code. Statement coverage measures the percentage of executable statements run by the test suite. Decision (branch) coverage measures the percentage of decision outcomes exercised (the true and false results of every decision point) — a stricter criterion that requires at least one test per branch direction.
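The gap between statement and decision coverage shows up in even a tiny function. In this sketch (the clamp function is hypothetical), a single test executes every statement yet covers only half the decision outcomes:

```python
def clamp(value: int, limit: int) -> int:
    if value > limit:        # the decision point
        value = limit        # statement inside the True branch
    return value

# One test reaches every statement (100% statement coverage), but
# only the True outcome of the decision (50% decision coverage):
assert clamp(10, 5) == 5

# A second test takes the False outcome, reaching 100% decision
# coverage — the stricter criterion needs both branch directions:
assert clamp(3, 5) == 3
```

This is why decision coverage subsumes statement coverage but not vice versa — a fact that K2-level exam questions probe regularly.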
Experience-based techniques. Exploratory testing relies on the tester's skill and curiosity: test design and execution happen simultaneously, guided by a charter rather than a rigid script. Checklist-based testing uses a prepared list of conditions the tester verifies. Error guessing applies knowledge of common defect types and past project history to target areas most likely to contain bugs.
Test Management
Managing a test effort involves balancing scope, time, budget, and risk.
Risk-based testing. Test managers use risk analysis to prioritise test cases and allocate effort where failure would have the greatest impact or is most likely. Product risk (what could go wrong with the software) drives the test scope; project risk (what could go wrong with the project) drives planning and contingency.
Defect lifecycle. A defect moves through states: new, assigned, open, fixed, retested, closed (or re-opened). Understanding the defect lifecycle ensures testers, developers, and managers share a common language and that no defect falls through the cracks.
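The lifecycle is essentially a small state machine, and it can be sketched as one. State names vary between defect-tracking tools; this sketch simply encodes the states listed above and rejects transitions the workflow does not allow:

```python
# Defect lifecycle as a state machine. State names are taken from the
# text above; real tools (Jira, Bugzilla, etc.) use their own variants.
ALLOWED_TRANSITIONS = {
    "new":       {"assigned"},
    "assigned":  {"open"},
    "open":      {"fixed"},
    "fixed":     {"retested"},
    "retested":  {"closed", "re-opened"},
    "re-opened": {"assigned"},
    "closed":    set(),
}

def transition(current: str, target: str) -> str:
    """Move a defect to a new state, rejecting illegal jumps."""
    if target not in ALLOWED_TRANSITIONS[current]:
        raise ValueError(f"illegal transition: {current} -> {target}")
    return target

# A defect that passes retest moves straight through to closed:
state = "new"
for step in ("assigned", "open", "fixed", "retested", "closed"):
    state = transition(state, step)
print(state)  # → closed
```

A defect that fails retest would instead go retested → re-opened → assigned, looping until the fix actually works — which is the shared vocabulary the lifecycle exists to provide.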
Test metrics. Common metrics reported by test managers include test coverage (what percentage of the test basis has been covered), pass rate (what percentage of executed tests passed), and defect density (defects per function point or KLOC). Metrics inform go/no-go decisions and feed retrospectives that improve future test processes.
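The three metrics above are simple ratios, sketched below with illustrative figures (the numbers are invented, not drawn from any real project):

```python
# Illustrative figures for a hypothetical test cycle.
executed, passed = 180, 162
planned_conditions, covered_conditions = 250, 230
defects_found, kloc = 46, 12.5   # kloc = thousand lines of code

pass_rate = passed / executed * 100
coverage = covered_conditions / planned_conditions * 100
defect_density = defects_found / kloc   # defects per KLOC

print(f"pass rate:      {pass_rate:.1f}%")                    # → 90.0%
print(f"coverage:       {coverage:.1f}%")                     # → 92.0%
print(f"defect density: {defect_density:.2f} defects/KLOC")   # → 3.68
```

No single number tells the whole story — a 90% pass rate means little if coverage of the test basis is low — which is why managers report these metrics together.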

How to Use This PDF for Exam Preparation
A printable practice test is a different study tool from a screen-based quiz. You work on paper without instant feedback, without the ability to look anything up, and without the option to click back. That closer match to real exam conditions is exactly why printed practice is valuable in the final days before sitting the CTFL.
Start by reading the question stem carefully. ISTQB questions often include negative qualifiers — "which of the following is NOT a testing principle" — that are easy to miss when reading quickly. Underline or circle the key word before choosing an answer.
After completing the PDF set, score yourself and categorise every wrong answer by syllabus chapter. If you miss two or more questions in a single chapter, return to the syllabus and re-read that section before attempting the next practice set. Targeted review is more efficient than re-reading the entire syllabus.
For K3 (apply) questions, work through the scenario step by step. Equivalence partitioning and boundary value analysis questions give you a set of inputs and ask which test cases provide the best coverage — draw the partitions on paper before selecting your answer.
Time management matters even on a 60-minute exam. Allocate roughly 90 seconds per question, flag any you are unsure about, and return to them after completing the rest. Spending five minutes on a single hard question can cost you the time needed to answer three easier ones.
The ISTQB exam is closed-book, so familiarity with definitions matters. Create a personal glossary of the terms you keep confusing — defect vs. error vs. failure, verification vs. validation, static vs. dynamic testing — and review it daily in the week before the exam.
Common Mistakes CTFL Candidates Make
Thousands of candidates sit the CTFL every year, and certain errors appear repeatedly. Knowing what trips people up is almost as valuable as knowing the syllabus content itself.
Confusing defect, error, and failure. An error is a human action — a developer misreads a requirement. A defect is the result of that error — a line of code that does the wrong thing. A failure is what the user experiences when that defect executes — the application crashes or returns a wrong value. The chain is: error → defect → failure. Many exam questions test exactly this distinction.
Mixing up verification and validation. Verification asks "are we building the product right?" — it checks conformance to specification. Validation asks "are we building the right product?" — it checks fitness for user need. Static reviews are primarily verification activities; acceptance testing is primarily validation.
Underestimating non-functional testing questions. Performance, security, and usability topics feel less tangible than functional test design, so candidates spend less time on them. CTFL questions on non-functional testing are straightforward if you have read the definitions; they become hard if you have skipped that section entirely.
Assuming agile removes the need for test planning. Agile teams plan at multiple levels — release, sprint, and session. The CTFL syllabus explicitly covers testing in agile contexts, including the whole-team approach and the tester's role in sprint ceremonies. Questions about agile testing appear on the exam, and candidates who have studied only waterfall-oriented content are caught off guard.
Neglecting the generic test process. Candidates sometimes memorise the seven principles and the test techniques but overlook the test process phases. Exam questions ask which activity belongs in which phase — for example, identifying test conditions belongs in test analysis, not test design — and getting these wrong is an avoidable loss of marks.
The printable PDF in this resource covers all six CTFL syllabus chapters. Work through it under timed conditions, review your errors systematically, and you will go into exam day with a clear picture of where you stand.
ISTQB Key Concepts
What is the passing score for the ISTQB exam?
The Foundation Level (CTFL) exam requires a score of 65% — at least 26 of 40 questions correct. Check the official exam guide for the requirements of other ISTQB certifications.
How long is the ISTQB exam?
The CTFL exam allows 60 minutes; candidates sitting the exam in a language other than their native language typically receive 25% extra time. Time management is critical for success.
How should I prepare for the ISTQB exam?
Start with a diagnostic test, create a 4-8 week study plan, and take at least 3 full practice exams.
What topics does the ISTQB exam cover?
The CTFL exam covers six syllabus chapters: fundamentals of testing, testing throughout the software development lifecycle, static testing, test techniques, test management, and tool support for testing. Review the official syllabus for the complete breakdown.