SQL Practice Tests: Free Online Drills for Real-World Query Skills
SQL practice online: free practice tests with real-world queries, JOINs, subqueries, window functions, and interview-style problems for every skill level.

SQL looks simple right up until the moment you need to write a query against an unfamiliar schema under pressure. That's when the gap between knowing syntax and actually using SQL becomes obvious. The fix isn't more reading. It's reps. SQL is a hands-on skill, and the only reliable way to build it is to write queries — lots of them — against realistic data, then check whether the output matches expectations.
This guide walks through the SQL practice landscape: what to drill, where to drill it, and how to structure practice sessions so that what you learn actually sticks. We'll cover the fundamentals (SELECT, WHERE, GROUP BY, ORDER BY), the intermediate stuff that separates beginners from working analysts (JOINs, subqueries, CTEs, window functions), and the harder material that interviewers actually ask about (complex aggregations, recursive queries, performance reasoning). You'll also find free practice resources, common pitfalls, and a study plan that fits around a full-time schedule.
Why Drills Beat Tutorials
SQL is one of the few technical skills where understanding the syntax does not translate to being able to write it. You can read about LEFT JOIN forty times and still freeze when asked to join three tables under interview pressure. The only thing that fixes that is writing queries against real schemas, getting them wrong, and seeing exactly why they were wrong.
Three Levels of SQL Practice
- Beginner: SELECT, WHERE, ORDER BY, LIMIT, basic aggregations. Single-table queries. Build the muscle memory for syntax.
- Intermediate: JOINs, GROUP BY with HAVING, subqueries, CTEs, simple window functions. The level needed for most data analyst jobs.
- Advanced: Complex window functions, recursive CTEs, pivot operations, performance tuning. Senior analyst and engineer interview material.

Foundations matter more than people admit. If basic SELECT syntax requires conscious thought, you cannot tackle joins or window functions yet. Drill until SELECT col FROM table WHERE condition flows without thinking. Same for filtering with multiple conditions using AND/OR, sorting with ORDER BY ASC or DESC, and limiting result sets.
Pattern matching with LIKE and the wildcards % and _ trips up beginners — practice queries like finding all customers whose email ends in .edu, or all product names containing 'pro'. NULL handling is a separate skill: NULL is not equal to anything, including itself, so you need IS NULL and IS NOT NULL rather than the equals operator.
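Both rules are easy to see in a scratch SQLite session (via Python's stdlib sqlite3 module). The customers table and its rows here are invented purely for illustration:

```python
import sqlite3

# Hypothetical customers table, just to demonstrate LIKE and NULL handling.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, email TEXT)")
conn.executemany(
    "INSERT INTO customers VALUES (?, ?)",
    [("Ana", "ana@mit.edu"), ("Bo", "bo@corp.com"), ("Cy", None)],
)

# % matches any run of characters, so this anchors on '.edu' at the end.
edu = conn.execute(
    "SELECT name FROM customers WHERE email LIKE '%.edu'"
).fetchall()

# email = NULL would match nothing at all; IS NULL is required.
no_email = conn.execute(
    "SELECT name FROM customers WHERE email IS NULL"
).fetchall()

print(edu)       # [('Ana',)]
print(no_email)  # [('Cy',)]
```

Swapping `IS NULL` for `= NULL` in the second query silently returns zero rows, which is exactly why the distinction is worth drilling.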
Aggregations come next. COUNT, SUM, AVG, MIN, MAX. The key insight is that aggregations collapse rows together, which means anything not aggregated needs to appear in GROUP BY. This catches every beginner. The error message is unhelpful — you just learn through repetition. Practice queries like 'how many orders per customer', 'average order value by month', 'maximum salary by department'. HAVING filters aggregates the way WHERE filters rows; mixing them up is the single most common beginner mistake.
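A minimal sketch of the WHERE-versus-HAVING split, using an invented orders table in SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("ana", 10), ("ana", 30), ("bo", 5), ("bo", 5), ("cy", 100)],
)

# GROUP BY collapses rows per customer; HAVING filters the groups
# after aggregation (WHERE would filter individual rows before it).
rows = conn.execute("""
    SELECT customer, COUNT(*) AS n, SUM(amount) AS total
    FROM orders
    GROUP BY customer
    HAVING SUM(amount) > 20
    ORDER BY customer
""").fetchall()

print(rows)  # [('ana', 2, 40.0), ('cy', 1, 100.0)]
```

Note that `customer` appears in GROUP BY because it is the one non-aggregated column in the SELECT list.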
Joins are where many learners hit a wall. INNER JOIN keeps only rows with matches in both tables. LEFT JOIN keeps all rows from the left table even when there's no match on the right (NULLs fill in for missing columns). RIGHT JOIN is the mirror image. FULL OUTER JOIN keeps everything from both sides. The mistake everyone makes is forgetting that LEFT JOIN with a WHERE condition on the right table effectively turns into an INNER JOIN. The fix is putting that condition in the ON clause instead of the WHERE clause.
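The LEFT JOIN trap is easiest to internalize by running both versions side by side. The two-table schema below is hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER, name TEXT);
    CREATE TABLE orders (customer_id INTEGER, status TEXT);
    INSERT INTO customers VALUES (1, 'ana'), (2, 'bo');
    INSERT INTO orders VALUES (1, 'shipped');
""")

# Filtering the right table in WHERE discards the NULL-padded rows,
# so 'bo' vanishes and the LEFT JOIN behaves like an INNER JOIN.
where_version = conn.execute("""
    SELECT c.name, o.status FROM customers c
    LEFT JOIN orders o ON o.customer_id = c.id
    WHERE o.status = 'shipped'
""").fetchall()

# Moving the condition into ON keeps every customer.
on_version = conn.execute("""
    SELECT c.name, o.status FROM customers c
    LEFT JOIN orders o ON o.customer_id = c.id AND o.status = 'shipped'
    ORDER BY c.name
""").fetchall()

print(where_version)  # [('ana', 'shipped')]
print(on_version)     # [('ana', 'shipped'), ('bo', None)]
```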
Core SQL Patterns to Drill
WHERE with AND, OR, NOT, IN, BETWEEN, LIKE, IS NULL. ORDER BY with multiple columns and direction. LIMIT and OFFSET for pagination. Master these first — they appear in nearly every query.
Subqueries are conceptually straightforward but introduce a class of errors that takes practice to avoid. A subquery in the WHERE clause filters the outer query based on a condition that itself requires a query. The classic example: 'find all employees whose salary is above the average' — the average is itself a query, embedded in the WHERE clause. Correlated subqueries reference the outer query inside the subquery, which makes them powerful but also slow. Use CTEs instead when readability matters; use indexed columns when performance matters.
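The above-average-salary classic, sketched against an invented employees table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, salary REAL)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?)",
    [("ana", 90000), ("bo", 60000), ("cy", 75000)],
)

# The average is itself a query, embedded in the WHERE clause.
above_avg = conn.execute("""
    SELECT name FROM employees
    WHERE salary > (SELECT AVG(salary) FROM employees)
""").fetchall()

print(above_avg)  # [('ana',)] -- the average here is 75000
```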
CTEs deserve their own discussion because they transform how you write complex queries. Rather than nesting subqueries three levels deep — at which point you and everyone else stop being able to read the query — you can name each step. WITH active_customers AS (...), monthly_orders AS (...) SELECT ... lets you build complex queries one logical step at a time. Comfort with CTEs is often a large part of what separates junior query-writers from senior ones.
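A small sketch of the name-each-step style, with a hypothetical orders table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, month TEXT, amount REAL);
    INSERT INTO orders VALUES
        ('ana', '2024-01', 40), ('ana', '2024-02', 60),
        ('bo',  '2024-01', 10);
""")

# Each step gets a name instead of another level of nesting:
# first aggregate by month, then filter, then count.
rows = conn.execute("""
    WITH monthly AS (
        SELECT customer, month, SUM(amount) AS total
        FROM orders GROUP BY customer, month
    ),
    big_months AS (
        SELECT * FROM monthly WHERE total >= 40
    )
    SELECT customer, COUNT(*) AS n FROM big_months
    GROUP BY customer ORDER BY customer
""").fetchall()

print(rows)  # [('ana', 2)]
```

Reading top to bottom, each CTE answers one sub-question, which is exactly the readability win over nested subqueries.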
Window functions are the topic that separates people who write basic queries from people who actually solve business problems with SQL. They let you compute aggregates without collapsing rows. ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY order_date) gives each customer's orders a sequence number. SUM(amount) OVER (PARTITION BY customer_id ORDER BY order_date) creates a running total per customer. RANK() handles ties differently from ROW_NUMBER. LAG and LEAD let you compare a row to the previous or next row in a window. These are the queries that show up in real analyst work and in interviews. Master them.
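The two expressions quoted above, run against an invented orders table (SQLite has supported window functions since 3.25, so this works in a stock Python install):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, order_date TEXT, amount REAL);
    INSERT INTO orders VALUES
        ('ana', '2024-01-05', 10), ('ana', '2024-01-09', 20),
        ('bo',  '2024-01-07', 50);
""")

# Sequence number and running total per customer, without collapsing rows.
rows = conn.execute("""
    SELECT customer,
           ROW_NUMBER() OVER (PARTITION BY customer ORDER BY order_date)
               AS seq,
           SUM(amount) OVER (PARTITION BY customer ORDER BY order_date)
               AS running_total
    FROM orders
    ORDER BY customer, seq
""").fetchall()

print(rows)  # [('ana', 1, 10.0), ('ana', 2, 30.0), ('bo', 1, 50.0)]
```

Every input row survives to the output; the aggregate rides alongside instead of collapsing the rows, which is the whole point of window functions.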

Common SQL Mistakes
- Using = NULL instead of IS NULL.
- Forgetting that non-aggregated columns must appear in GROUP BY.
- Filtering on the right side of a LEFT JOIN in the WHERE clause (turning it into an INNER JOIN).
- Confusing UNION and UNION ALL.
- Using SELECT * in production.
- Missing parentheses in compound WHERE conditions.
Free SQL Practice Resources
- Interactive tutorials with real datasets (movies, world countries, soccer). Free, browser-based, progressive difficulty from beginner to window functions.
- Hundreds of SQL problems with difficulty ratings. Used by companies for technical screening, so the problems mirror real interview questions.
- 70+ SQL problems from easy to hard. Problems are often based on actual interview questions from FAANG companies.
- Postgres-specific practice with a single sample database (a country club). Excellent for learning PostgreSQL features like window functions.
- Free analyst-focused SQL tutorial with realistic business datasets. Covers everything from basics through advanced analytics.
- Free online sandbox for writing and testing SQL against MySQL, PostgreSQL, SQLite, and others. Build your own schemas and queries.
How to structure SQL practice sessions matters as much as which resource you use. Twenty minutes a day beats four hours on Saturday — your brain consolidates patterns during sleep, and consistent daily exposure builds the kind of pattern recognition that lets experienced analysts read queries quickly. Pick a single resource, work through its problems in order, and don't skip the easy ones because the easy ones are where syntax becomes automatic.
For each problem, write your answer before looking at the solution. Even if you're stuck. Especially if you're stuck. The struggle is where learning happens — looking at the solution before attempting transforms the exercise from active learning into passive reading, and the retention rate drops dramatically. After you write your query, run it. If it returns wrong results, debug it before checking the answer. The debugging process teaches more than the correct answer ever does.
When you do check the official solution, compare it to yours line by line. Did they use a CTE where you used a subquery? Did they handle NULLs differently? Did they avoid a JOIN you didn't need? Each difference is a learning opportunity. Take notes on patterns you didn't think of. A simple text file with 'patterns I want to remember' grows into a personal SQL handbook over weeks of practice.
SQL Practice Daily Routine
- ✓ Pick one problem at your current level (not too easy, not impossible)
- ✓ Read the problem twice before touching the keyboard
- ✓ Sketch the logic on paper or in comments before writing SQL
- ✓ Write the query yourself — no Googling, no looking at the solution
- ✓ Run it and verify the output matches expected results
- ✓ If wrong, debug methodically: check JOINs first, then WHERE, then aggregation
- ✓ Compare your final solution to the official answer
- ✓ Write down any new patterns or syntax you learned
- ✓ Review your notes file weekly to reinforce patterns
Interview-style SQL practice deserves separate attention because the patterns repeat across companies. Interviewers are not testing whether you can recall syntax — they're testing whether you can reason about data and translate business questions into queries. The classic patterns: top N per group (window functions with ROW_NUMBER), nth highest value (DENSE_RANK or LIMIT/OFFSET with caveats), date range calculations, customer cohort analysis, churn analysis, running totals and moving averages.
Top N per group is the single most common SQL interview question. The setup varies but the structure is constant: rank rows within groups, then filter to keep only the top N. The naive approach uses correlated subqueries — slow and ugly. The right approach uses window functions: ROW_NUMBER() OVER (PARTITION BY group_col ORDER BY value DESC) AS rn, then filter WHERE rn <= N. If you can write this from memory in under three minutes, you're ahead of most candidates. If not, drill it until you can.
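The pattern, end to end, on a made-up sales table (the schema and values are invented for the sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, rep TEXT, value REAL);
    INSERT INTO sales VALUES
        ('east', 'ana', 300), ('east', 'bo', 500), ('east', 'cy', 100),
        ('west', 'di',  200), ('west', 'ed', 400);
""")

# Rank within each group, then keep only the top N (here N = 1).
top_n = 1
rows = conn.execute("""
    WITH ranked AS (
        SELECT region, rep, value,
               ROW_NUMBER() OVER (
                   PARTITION BY region ORDER BY value DESC
               ) AS rn
        FROM sales
    )
    SELECT region, rep, value FROM ranked
    WHERE rn <= ?
    ORDER BY region
""", (top_n,)).fetchall()

print(rows)  # [('east', 'bo', 500.0), ('west', 'ed', 400.0)]
```

The window function must live in a CTE or subquery because the WHERE clause cannot reference `rn` in the same SELECT that defines it.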
Date and time problems are the second most common category. Find all customers who placed orders in two consecutive months. Find users who returned within seven days. Calculate retention by cohort. These require comfort with date arithmetic, EXTRACT or DATE_PART for pulling components from timestamps, INTERVAL syntax for adding or subtracting time, and DATE_TRUNC for rounding to months or weeks. Practice with real timestamp data, not just date columns.
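Dialects spell date functions differently: Postgres has DATE_TRUNC and EXTRACT, while SQLite uses strftime for the same job. A minimal orders-per-month sketch in SQLite, with an invented table of timestamps:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, ordered_at TEXT);
    INSERT INTO orders VALUES
        ('ana', '2024-01-03 09:15:00'),
        ('ana', '2024-01-20 14:00:00'),
        ('bo',  '2024-02-11 08:30:00');
""")

# Truncate each timestamp to its month, then count orders per month.
# In Postgres this would be DATE_TRUNC('month', ordered_at).
rows = conn.execute("""
    SELECT strftime('%Y-%m', ordered_at) AS month, COUNT(*) AS n
    FROM orders
    GROUP BY month
    ORDER BY month
""").fetchall()

print(rows)  # [('2024-01', 2), ('2024-02', 1)]
```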

Database Dialect Differences
PostgreSQL: Most feature-complete free option. Supports advanced window functions, recursive CTEs, full-text search, JSON columns. Use SERIAL or GENERATED for auto-increment. String functions are POSIX-flavored.
Performance reasoning is the topic that turns competent SQL writers into great ones. You don't need to know optimizer internals for most jobs — but you should understand a few key concepts. Indexes speed up lookups but slow down writes; SELECT * defeats column-store optimizations; subqueries are sometimes better written as JOINs and sometimes worse; OR conditions often prevent index use while UNION ALL with two separate queries doesn't. EXPLAIN (or EXPLAIN ANALYZE) shows you the query plan and is your friend.
The 'why is this query slow?' question is also common in interviews. The instinct most beginners have is to add an index. Sometimes that's right. Often it's not. The real answer is to understand what the optimizer is doing — what indexes exist, whether it's using them, whether row estimates match reality, and whether the join order makes sense. EXPLAIN ANALYZE on a query that's actually run shows you actual versus estimated row counts, which is the single most useful diagnostic for query plan issues.
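SQLite's EXPLAIN QUERY PLAN is a lighter cousin of Postgres's EXPLAIN ANALYZE, but it answers the same first question: is the index being used at all? A sketch with a hypothetical orders table and index:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER);
    CREATE INDEX idx_orders_customer ON orders(customer_id);
""")

# Ask the planner how it would run the query, without running it.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 7"
).fetchall()
plan_text = " ".join(str(row) for row in plan)

# The plan should mention a SEARCH using idx_orders_customer rather
# than a full SCAN of the orders table.
print(plan_text)
```

If the filter column had no index, the plan would say SCAN instead of SEARCH, which is the quick visual cue to look for.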
Beyond indexes, common performance issues include over-fetching (SELECT * when you need three columns), unnecessary DISTINCT (often hiding a join multiplication bug), GROUP BY when DISTINCT would suffice (or vice versa), and missing partitions on very large tables. The fix usually isn't a single magic optimization — it's understanding what the query is asking the database to do and making that simpler.
Working Through SQL Practice Daily
- + Pattern recognition builds steadily — patterns become automatic with daily exposure
- + Easier to schedule 20 minutes than a multi-hour block
- + Sleep consolidates learning, so spread practice maximizes retention
- + Builds endurance for technical interviews that may include 60-90 minutes of SQL
- + Confidence grows visibly week over week, which keeps you motivated
- − Daily commitment is harder to maintain than weekend cramming for some people
- − Progress feels slow in week one before patterns start clicking
- − Easy to coast on familiar patterns without pushing into harder material
Common patterns worth memorizing because they show up everywhere: deduplication using ROW_NUMBER OVER PARTITION BY in a CTE, keeping rn=1. Filling in missing dates using a date series joined to your data. Computing percent change using LAG. Bucket assignment using NTILE or CASE WHEN. Pivoting rows into columns using conditional aggregation. Detecting gaps in sequences using LAG and comparing dates. Finding the most recent record per group using ROW_NUMBER or DISTINCT ON in Postgres.
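The most-recent-record-per-group pattern from that list, sketched on an invented events table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events (user_id INTEGER, seen_at TEXT, payload TEXT);
    INSERT INTO events VALUES
        (1, '2024-01-01', 'old'), (1, '2024-01-05', 'new'),
        (2, '2024-01-02', 'only');
""")

# Rank each user's rows newest-first, then keep rn = 1.
# In Postgres, DISTINCT ON (user_id) ... ORDER BY user_id, seen_at DESC
# expresses the same thing more tersely.
latest = conn.execute("""
    WITH ranked AS (
        SELECT user_id, payload,
               ROW_NUMBER() OVER (
                   PARTITION BY user_id ORDER BY seen_at DESC
               ) AS rn
        FROM events
    )
    SELECT user_id, payload FROM ranked WHERE rn = 1
    ORDER BY user_id
""").fetchall()

print(latest)  # [(1, 'new'), (2, 'only')]
```

Deduplication is the same query with a different ORDER BY inside the window: sort by whatever defines "the row to keep" and discard everything with rn > 1.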
Recursive CTEs are advanced but worth learning for tree and hierarchy data. WITH RECURSIVE cte_name AS (anchor query UNION ALL recursive query) lets you walk a parent-child tree without writing application code. Employee org charts, comment threads, category hierarchies, bill-of-materials data — all of these are easier with recursive CTEs. The pattern is: start with the root rows, then repeatedly join to children until no more rows are returned.
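The anchor-plus-recursive-step pattern on a toy org chart (names and structure invented for the sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (id INTEGER, name TEXT, manager_id INTEGER);
    INSERT INTO employees VALUES
        (1, 'ceo', NULL), (2, 'vp', 1), (3, 'eng', 2), (4, 'eng2', 2);
""")

# Anchor: the root rows (no manager). Recursive step: join each
# level already found to its direct reports, until no rows remain.
rows = conn.execute("""
    WITH RECURSIVE chain AS (
        SELECT id, name, 0 AS depth
        FROM employees WHERE manager_id IS NULL
        UNION ALL
        SELECT e.id, e.name, chain.depth + 1
        FROM employees e JOIN chain ON e.manager_id = chain.id
    )
    SELECT name, depth FROM chain ORDER BY depth, name
""").fetchall()

print(rows)  # [('ceo', 0), ('vp', 1), ('eng', 2), ('eng2', 2)]
```

Carrying a depth column along, as here, is the usual trick for indenting output or capping recursion.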
Final note on practice strategy: build a project. Pick a public dataset (Kaggle, government open data, GitHub's public BigQuery datasets) and ask interesting questions of it. Write the queries. Visualize the results. The practice problems on tutorial sites build skills, but a real project shows you whether you can apply those skills end-to-end. Most analysts and engineers learned SQL by needing it for actual work — practice sites are a substitute, but the real version is irreplaceable. Find a dataset that interests you and turn yourself loose on it for a weekend.
'Top customers' is ambiguous. By revenue? Order count? Recency? Lifetime value? The clarifying questions are half the job. Always specify the metric, the time window, and the grouping before opening your editor. Write the assumptions in a comment at the top of your query — it forces clarity and helps reviewers verify your interpretation matches theirs.
Beyond the technical syntax, there's a meta-skill that distinguishes effective SQL users: translating fuzzy business questions into precise queries. A stakeholder asks for 'top customers' — but top by what measure? Revenue? Order count? Recency? Lifetime value? The clarifying questions are part of the job. Practice this by reading vague questions and writing down the assumptions before you write SQL. 'Top customers by total revenue in the past 12 months, excluding refunds, grouped by customer ID' is what 'top customers' actually means once you specify it.
Another meta-skill: knowing when SQL is the wrong tool. Yes, you can do almost anything in SQL, including loops with recursive CTEs and complex statistical calculations with window functions. But sometimes the answer is to dump data to Python or R for analysis that's hard to express in SQL — clustering, machine learning, complex visualization, custom statistical tests. Recognize when you're bending SQL beyond its natural shape. The right architecture for most analytics is SQL for data prep and aggregation, then handoff to a more flexible language for analysis.
Testing your queries is a discipline that beginners skip and experts never skip. Before claiming a query works, verify a few things: does the row count match expectations? Are there NULLs where you didn't expect them? Are there duplicates from JOIN multiplication? Did the aggregation collapse correctly? Run a few hand-checks: pick a specific customer or product and verify the numbers manually for that row. The query that returns 47 numbers, all wrong, looks identical to the query that returns 47 numbers, all correct. The only way to tell the difference is verification.
Schema design is also worth understanding even if you mainly query rather than design. Normalized schemas reduce duplication but require joins for almost every question. Denormalized schemas pre-join data into wider tables, making queries simpler but updates harder. Star schemas in data warehouses separate facts (events, transactions) from dimensions (customers, products, dates). Understanding which kind of schema you're working with shapes how you write queries. The same business question gets a different answer in normalized OLTP versus dimensional OLAP designs.
Reading other people's SQL is a learning shortcut that's easy to skip. When you join a team or a codebase, the existing queries encode years of accumulated wisdom about that data — which joins to use, which filters matter, which edge cases to handle. Read the production queries before writing new ones. You'll learn things about the data that aren't in any documentation. The same applies for public examples — Mode's chart gallery, Stack Overflow answers, and GitHub's BigQuery public dataset queries all contain idioms worth absorbing.
Finally, treat SQL practice as an ongoing skill rather than a finite course. Even senior data engineers and analysts continue learning new patterns. The PostgreSQL release notes alone introduce new optimizations and features every year. New tools (DuckDB, ClickHouse, modern data lake engines) bring their own SQL flavors. Stay curious about what's possible. The analyst who reaches for FILTER clauses, LATERAL joins, or recursive CTEs at the right moment outperforms the one who learned SQL once and never revisited the documentation.
Practice resources change over time but the patterns don't. Two decades of SQL practice problems all reduce to the same core skills: filter, aggregate, join, window. Add CTEs for readability and you have most of what working analysts need. Get comfortable with those patterns and you can pick up any new database or dialect in days, because the underlying logic transfers directly. That's why SQL has outlived almost every other technology from the 1970s — the model is just too useful to replace.