SQL Cheat Sheet: Essential Commands and Syntax Reference


SQL (Structured Query Language) is the standard language for working with relational databases, used by data professionals, software developers, business analysts, and many others who work with data. A good SQL cheat sheet provides a quick reference for the syntax, commands, and patterns that appear regularly in database work. While SQL has substantial depth that takes years to master, the basic command vocabulary covering roughly 80% of routine operations can be learned in days. This reference covers the essential SQL commands and patterns most commonly used across database systems including MySQL, PostgreSQL, SQL Server, Oracle, and SQLite.

The SQL standard defines a common syntax across database systems, but each implementation has variations. ANSI SQL provides the foundation, with extensions added by individual database vendors for system-specific features. Most basic queries work across systems with minimal modification, but advanced features often require system-specific syntax. Understanding which constructs are standard versus vendor-specific helps when moving between database systems or writing portable code. This reference focuses on broadly applicable SQL, with notes about specific variations where relevant.

SQL Cheat Sheet Quick Reference

SELECT: Retrieves data from tables. WHERE: Filters rows. JOIN: Combines tables. GROUP BY: Aggregates by groups. ORDER BY: Sorts results. INSERT/UPDATE/DELETE: Modifies data. CREATE/ALTER/DROP: Manages schema. Common functions: COUNT, SUM, AVG, MIN, MAX, CONCAT, SUBSTRING, COALESCE, CASE. Standard: ANSI SQL with vendor-specific extensions.

The SELECT statement is the workhorse of SQL, used for nearly all data retrieval. Basic syntax: SELECT column1, column2 FROM table WHERE condition ORDER BY column LIMIT n. The * wildcard selects all columns: SELECT * FROM table. DISTINCT eliminates duplicate rows: SELECT DISTINCT column FROM table. Column aliases use AS: SELECT column AS new_name FROM table. A SELECT statement lists columns first, then the source table(s), then filtering, grouping, sorting, and limit clauses; this logical structure holds for every SQL query regardless of complexity.
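As a concrete illustration, these SELECT forms can be run against SQLite using Python's built-in sqlite3 module (a minimal sketch; the `users` table and its columns are hypothetical):

```python
import sqlite3

# In-memory SQLite database with a small hypothetical `users` table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, country TEXT)")
conn.executemany("INSERT INTO users (name, country) VALUES (?, ?)",
                 [("Ana", "BR"), ("Ben", "US"), ("Cho", "US")])

# Column alias with AS, sorting, and a row limit.
rows = conn.execute(
    "SELECT name AS user_name FROM users ORDER BY name LIMIT 2"
).fetchall()
print(rows)  # [('Ana',), ('Ben',)]

# DISTINCT eliminates duplicate values.
countries = conn.execute(
    "SELECT DISTINCT country FROM users ORDER BY country"
).fetchall()
print(countries)  # [('BR',), ('US',)]
```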


Essential SQL Commands

SELECT

Retrieve data: SELECT col1, col2 FROM table. Add WHERE for filtering, ORDER BY for sorting, LIMIT for row limits.

INSERT

Add rows: INSERT INTO table (col1, col2) VALUES (val1, val2). Or INSERT INTO table SELECT ... for copying data.

UPDATE

Modify existing rows: UPDATE table SET col1 = val1 WHERE condition. Always include WHERE to avoid updating all rows.

DELETE

Remove rows: DELETE FROM table WHERE condition. Always include WHERE to avoid deleting entire table contents.

CREATE TABLE

Define new table: CREATE TABLE name (col1 INT, col2 VARCHAR(255)). Specify data types and constraints.

ALTER TABLE

Modify table structure: ALTER TABLE name ADD COLUMN / DROP COLUMN, etc. Index syntax via ALTER TABLE (such as ADD INDEX) is vendor-specific; standard SQL uses CREATE INDEX.

DROP TABLE

Delete table entirely: DROP TABLE name. Permanent — backup important tables before dropping.

INDEX

Speed up queries: CREATE INDEX name ON table(column). Indexes accelerate WHERE and JOIN operations.
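The data-modification commands above can be exercised end to end in SQLite (a sketch using Python's built-in sqlite3 module; the `products` table and its columns are hypothetical):

```python
import sqlite3

# Hypothetical `products` table to exercise INSERT, UPDATE, and DELETE.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, price REAL)")

# INSERT adds rows; ? placeholders keep values out of the SQL string.
conn.executemany("INSERT INTO products (name, price) VALUES (?, ?)",
                 [("pen", 1.50), ("pad", 3.00)])

# UPDATE with WHERE changes only the matching rows.
conn.execute("UPDATE products SET price = 2.00 WHERE name = 'pen'")

# DELETE with WHERE removes only the matching rows.
conn.execute("DELETE FROM products WHERE name = 'pad'")

remaining = conn.execute("SELECT name, price FROM products").fetchall()
print(remaining)  # [('pen', 2.0)]
```

Omitting the WHERE clause on either UPDATE or DELETE would touch every row, which is why the entries above stress always including one.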

JOIN operations combine data from multiple tables and are fundamental to relational database queries. INNER JOIN returns only matching rows from both tables. LEFT JOIN returns all rows from the left table plus matching rows from the right table (with NULLs where no match exists). RIGHT JOIN reverses this. FULL OUTER JOIN returns all rows from both tables. CROSS JOIN produces the Cartesian product (every row from one table combined with every row from the other). Join syntax: SELECT cols FROM table1 JOIN table2 ON table1.col = table2.col. Multiple joins chain together: FROM table1 JOIN table2 ON ... JOIN table3 ON ...
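The difference between INNER and LEFT JOIN shows up clearly with a user who has no orders (a sketch against SQLite via Python's sqlite3; table names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
INSERT INTO users  VALUES (1, 'Ana'), (2, 'Ben');
INSERT INTO orders VALUES (10, 1, 50.0);
""")

# INNER JOIN: only users with at least one matching order.
inner = conn.execute(
    "SELECT u.name, o.total FROM users u JOIN orders o ON o.user_id = u.id"
).fetchall()
print(inner)  # [('Ana', 50.0)]

# LEFT JOIN: all users; NULL (None in Python) where no order matches.
left = conn.execute(
    "SELECT u.name, o.total FROM users u "
    "LEFT JOIN orders o ON o.user_id = u.id ORDER BY u.name"
).fetchall()
print(left)  # [('Ana', 50.0), ('Ben', None)]
```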

WHERE clause filters rows based on conditions. Comparison operators include = (equal), != or <> (not equal), <, >, <=, >=. Logical operators AND, OR, NOT combine conditions. Special operators include IN (matches any value in list), NOT IN, BETWEEN (range check), LIKE (pattern matching with % and _ wildcards), IS NULL, IS NOT NULL. Examples: WHERE age >= 18, WHERE status IN ('active', 'pending'), WHERE name LIKE 'Sm%', WHERE phone IS NOT NULL. Combining conditions: WHERE (age >= 18 AND status = 'active') OR special_case = 1.
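Combining several of these operators in one WHERE clause looks like this in practice (a sketch; the `customers` table and its data are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, age INTEGER, status TEXT, phone TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?, ?, ?)", [
    ("Smith", 34, "active", "555-0101"),
    ("Smythe", 17, "active", None),       # under 18: filtered out
    ("Jones", 40, "closed", "555-0102"),  # name doesn't match 'Sm%'
])

# IN, LIKE, and IS NOT NULL combined with AND.
rows = conn.execute(
    "SELECT name FROM customers "
    "WHERE age >= 18 AND status IN ('active', 'pending') "
    "AND name LIKE 'Sm%' AND phone IS NOT NULL"
).fetchall()
print(rows)  # [('Smith',)]
```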

GROUP BY enables aggregate analysis by grouping rows sharing common values. Used with aggregate functions COUNT, SUM, AVG, MIN, MAX. Syntax: SELECT category, COUNT(*) FROM table GROUP BY category. HAVING filters grouped results (different from WHERE which filters before grouping): SELECT category, COUNT(*) FROM table GROUP BY category HAVING COUNT(*) > 10. The order matters: WHERE filters individual rows, GROUP BY creates groups, HAVING filters groups, ORDER BY sorts results, LIMIT restricts row count.
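A small runnable version of the GROUP BY / HAVING pattern (hypothetical `products` table, SQLite via Python's sqlite3):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT, category TEXT)")
conn.executemany("INSERT INTO products VALUES (?, ?)", [
    ("pen", "office"), ("pad", "office"), ("mug", "kitchen"),
])

# WHERE filters rows, GROUP BY forms groups, HAVING filters the groups.
counts = conn.execute(
    "SELECT category, COUNT(*) FROM products "
    "GROUP BY category HAVING COUNT(*) > 1"
).fetchall()
print(counts)  # [('office', 2)] -- 'kitchen' group dropped by HAVING
```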

Common SELECT patterns:
SELECT * FROM users; — All columns, all rows
SELECT name, email FROM users WHERE active = 1; — Specific columns, filtered
SELECT COUNT(*) FROM orders; — Count rows
SELECT * FROM users ORDER BY created_at DESC LIMIT 10; — Latest 10 users
SELECT category, COUNT(*) FROM products GROUP BY category; — Count by category
SELECT DISTINCT country FROM customers; — Unique values
SELECT * FROM users WHERE name LIKE 'A%'; — Names starting with A

Aggregate functions perform calculations across rows. COUNT(*) counts all rows. COUNT(column) counts non-null values. SUM(column) totals numeric values. AVG(column) calculates average. MIN(column) and MAX(column) return smallest and largest values. These functions can apply to entire query results or to GROUP BY groups. Window functions (newer SQL feature) provide aggregates while preserving individual rows: SELECT name, salary, AVG(salary) OVER (PARTITION BY department) FROM employees calculates department average for each row.
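The window-function example from the paragraph can be run directly, assuming an SQLite build with window-function support (version 3.25+, bundled with modern Python); the `employees` table is hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, department TEXT, salary REAL)")
conn.executemany("INSERT INTO employees VALUES (?, ?, ?)", [
    ("Ana", "eng", 100.0), ("Ben", "eng", 80.0), ("Cho", "ops", 60.0),
])

# AVG(...) OVER (PARTITION BY ...) keeps one row per employee
# while attaching the department-wide average to each row.
rows = conn.execute(
    "SELECT name, salary, AVG(salary) OVER (PARTITION BY department) "
    "FROM employees ORDER BY name"
).fetchall()
print(rows)  # [('Ana', 100.0, 90.0), ('Ben', 80.0, 90.0), ('Cho', 60.0, 60.0)]
```

Contrast with GROUP BY, which would collapse each department to a single row; the window function preserves every employee row.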

String functions handle text manipulation. CONCAT(str1, str2) joins strings. SUBSTRING(str, start, length) extracts portions. UPPER and LOWER change case. TRIM removes whitespace. LENGTH returns string length. REPLACE substitutes substrings. Different databases have variations: MySQL uses CONCAT, SQL Server uses the + operator or CONCAT, and PostgreSQL and SQLite use the standard || operator. Date/time functions include CURRENT_DATE, NOW(), DATE_ADD, DATEDIFF, and EXTRACT for getting date parts. Date manipulation varies substantially across databases.

Subqueries embed queries within other queries. SELECT * FROM users WHERE id IN (SELECT user_id FROM orders WHERE total > 100) finds users with large orders. Subqueries can appear in WHERE, FROM, or SELECT clauses. Correlated subqueries reference outer query columns: SELECT name, (SELECT COUNT(*) FROM orders WHERE user_id = u.id) AS order_count FROM users u. CTEs (Common Table Expressions) provide cleaner syntax for complex queries: WITH ranked_users AS (SELECT *, ROW_NUMBER() OVER (ORDER BY signup_date) AS row_num FROM users) SELECT * FROM ranked_users WHERE row_num <= 10.
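Both patterns run as written against SQLite (window functions require SQLite 3.25+; tables and data are hypothetical, and the CTE is trimmed to the first signup for a small dataset):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT, signup_date TEXT);
CREATE TABLE orders (user_id INTEGER, total REAL);
INSERT INTO users  VALUES (1, 'Ana', '2024-01-01'), (2, 'Ben', '2024-02-01');
INSERT INTO orders VALUES (1, 150.0), (2, 20.0);
""")

# Subquery in WHERE: users who placed an order over 100.
big = conn.execute(
    "SELECT name FROM users WHERE id IN "
    "(SELECT user_id FROM orders WHERE total > 100)"
).fetchall()
print(big)  # [('Ana',)]

# CTE with ROW_NUMBER: earliest signup(s).
first = conn.execute("""
    WITH ranked_users AS (
        SELECT name, ROW_NUMBER() OVER (ORDER BY signup_date) AS row_num
        FROM users
    )
    SELECT name FROM ranked_users WHERE row_num <= 1
""").fetchall()
print(first)  # [('Ana',)]
```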


Indexes dramatically accelerate query performance. CREATE INDEX idx_email ON users(email) speeds up queries filtering or joining on email. Without indexes, databases must scan entire tables for each query — slow on large tables. Primary keys automatically have indexes. Foreign keys often benefit from indexes. WHERE and JOIN conditions on indexed columns run much faster than on unindexed columns. However, indexes slow down INSERT/UPDATE/DELETE operations because indexes must be maintained alongside row changes. Strategic indexing balances query performance with write performance.
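Whether a query actually uses an index can be checked with SQLite's EXPLAIN QUERY PLAN (a sketch; exact plan wording varies by SQLite version, and the `users` table is hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("CREATE INDEX idx_email ON users(email)")

# EXPLAIN QUERY PLAN is SQLite's EXPLAIN variant; the last column of
# each plan row is a human-readable description of the step.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?", ("a@b.com",)
).fetchall()
print(plan)

# An equality filter on the indexed column should mention idx_email.
used_index = any("idx_email" in row[-1] for row in plan)
print(used_index)
```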

Transactions ensure data consistency across multiple operations. BEGIN starts a transaction, COMMIT saves changes, ROLLBACK undoes changes. ACID properties (Atomicity, Consistency, Isolation, Durability) define transactional guarantees. Transactions matter for operations that involve multiple updates that must succeed or fail together — money transfers must debit one account and credit another atomically. Most database operations should use transactions even for simple updates to enable rollback if errors occur. Default behavior varies by database — some auto-commit each statement, others require explicit COMMIT.
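The money-transfer example can be sketched with explicit BEGIN/COMMIT/ROLLBACK (Python's sqlite3 with isolation_level=None for manual transaction control; account data is hypothetical):

```python
import sqlite3

# isolation_level=None disables sqlite3's implicit transaction handling,
# so BEGIN/COMMIT/ROLLBACK are issued explicitly.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
conn.execute("INSERT INTO accounts VALUES (1, 100.0), (2, 0.0)")

# Transfer 40 atomically: both UPDATEs commit together or not at all.
conn.execute("BEGIN")
try:
    conn.execute("UPDATE accounts SET balance = balance - 40 WHERE id = 1")
    conn.execute("UPDATE accounts SET balance = balance + 40 WHERE id = 2")
    conn.execute("COMMIT")
except sqlite3.Error:
    conn.execute("ROLLBACK")  # undo both updates on any failure
    raise

balances = conn.execute("SELECT id, balance FROM accounts ORDER BY id").fetchall()
print(balances)  # [(1, 60.0), (2, 40.0)]
```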

NULL handling deserves specific attention because NULL behaves differently from empty string or zero. NULL means "unknown" or "absent" — comparisons with NULL produce NULL (not true or false). Use IS NULL and IS NOT NULL for null checking, never = NULL or != NULL. COALESCE returns first non-null value: SELECT COALESCE(nickname, name, 'Guest') FROM users. NULL aggregation: COUNT(*) counts all rows; COUNT(column) excludes NULLs; SUM, AVG ignore NULLs. Understanding NULL behavior prevents subtle bugs in queries.
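A quick demonstration of why = NULL fails while IS NULL and COALESCE behave as described (hypothetical `users` table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, nickname TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("Ana", "An"), ("Ben", None)])

# = NULL never matches: the comparison evaluates to NULL, not true.
wrong = conn.execute("SELECT name FROM users WHERE nickname = NULL").fetchall()
print(wrong)  # []

# IS NULL is the correct null check.
right = conn.execute("SELECT name FROM users WHERE nickname IS NULL").fetchall()
print(right)  # [('Ben',)]

# COALESCE returns the first non-null argument per row.
names = conn.execute(
    "SELECT COALESCE(nickname, name, 'Guest') FROM users ORDER BY name"
).fetchall()
print(names)  # [('An',), ('Ben',)]
```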

Common SQL Performance Tips

  • Index columns used in WHERE, JOIN, and ORDER BY
  • Use specific column names instead of SELECT *
  • Filter early with WHERE before joining when possible
  • Use EXISTS instead of IN for subqueries on large tables
  • Avoid functions on indexed columns in WHERE (defeats indexes)
  • Use LIMIT for queries returning many rows when needed
  • Run EXPLAIN to see query plan and identify problems
  • Test queries on representative data before production

Query optimization through EXPLAIN reveals how databases execute queries. Different databases have different EXPLAIN syntax (EXPLAIN, EXPLAIN PLAN, EXPLAIN ANALYZE) but all show the execution plan including table scan order, index usage, and estimated row counts. Performance problems often involve full table scans where indexes should be used, joins ordered inefficiently, or unnecessary sorting. Reading EXPLAIN output requires practice but produces dramatic performance improvements when problems are identified. Slow queries on production databases often respond well to careful analysis using EXPLAIN.

Common SQL anti-patterns produce slow or incorrect queries. SELECT * returns more data than usually needed; specifying columns reduces network transfer. Functions on indexed columns (WHERE LOWER(name) = 'john') prevent index usage; alternatives include functional indexes or storing pre-computed values. OR conditions sometimes prevent index usage; rewriting as UNION can help. A subquery in SELECT that runs for each row is often slower than a single JOIN; consider whether the subquery is necessary. NOT IN with NULL values produces unexpected results; NOT EXISTS is often safer.
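The NOT IN pitfall is easy to reproduce: a single NULL in the subquery result makes NOT IN match nothing, while NOT EXISTS behaves correctly (hypothetical tables, SQLite via Python's sqlite3):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (user_id INTEGER);
INSERT INTO users  VALUES (1, 'Ana'), (2, 'Ben');
INSERT INTO orders VALUES (1), (NULL);   -- one order with a NULL user_id
""")

# NOT IN against a set containing NULL matches nothing:
# id <> NULL evaluates to NULL, so no row can pass the test.
not_in = conn.execute(
    "SELECT name FROM users WHERE id NOT IN (SELECT user_id FROM orders)"
).fetchall()
print(not_in)  # []

# NOT EXISTS handles the NULL row correctly: Ben has no orders.
not_exists = conn.execute(
    "SELECT name FROM users WHERE NOT EXISTS "
    "(SELECT 1 FROM orders o WHERE o.user_id = users.id)"
).fetchall()
print(not_exists)  # [('Ben',)]
```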

Schema design fundamentals affect query performance and maintenance. Normalize data appropriately — avoid duplicating information across tables when relationships can capture it. Denormalize selectively for query performance when appropriate, accepting redundancy in exchange for simpler/faster queries. Use appropriate data types — INTEGER vs BIGINT, VARCHAR(255) vs TEXT all have implications. Add foreign keys to enforce referential integrity. Set NOT NULL constraints where appropriate to prevent missing-data bugs. Document schema decisions through comments or external documentation.

For new SQL learners, structured progression supports skill development. Start with basic SELECT queries on simple single-table operations. Progress to WHERE filtering and ORDER BY sorting. Add JOIN operations across multiple tables. Learn aggregation with GROUP BY. Master subqueries and CTEs. Develop comfort with window functions. Each stage builds on previous knowledge. Practice with real databases (try free options like SQLite or PostgreSQL) rather than just reading about syntax. Building actual queries against real data accelerates learning more than passive study.

For SQL practitioners advancing skills, several areas reward continued learning. Window functions handle complex analytical patterns elegantly. CTEs improve readability of complex queries. Recursive CTEs solve hierarchical problems. JSON data type support (in PostgreSQL, MySQL, others) enables semi-structured data. Pivot operations transform data shape. Regular expressions for advanced pattern matching. Advanced indexing strategies including partial and functional indexes. Database-specific features like PostgreSQL's array types or MySQL's spatial functions. Each adds capabilities valuable for specific use cases.

Database administration topics extend beyond pure SQL into operational concerns. Backup and restore procedures protect against data loss. Replication enables high availability. Performance monitoring identifies bottlenecks before they impact users. Security including authentication, authorization, and data encryption protects sensitive data. Capacity planning ensures adequate resources. Major database systems have substantial administration depth that DBA-track careers explore. Application developers benefit from basic familiarity with these topics even if they don't manage databases directly.


SQL Quick Stats

1970s: Decade when SQL was developed at IBM
ANSI SQL: Standard governing SQL across databases
5+: Major database systems (MySQL, PostgreSQL, SQL Server, Oracle, SQLite)
~80%: Approximate share of routine work covered by the basic command vocabulary

Major Database Systems

MySQL

Open source, widely used in web applications. Owned by Oracle. Strong community support. Forks include MariaDB.

PostgreSQL

Open source, advanced features. Strong standards compliance. Often preferred for complex applications.

SQL Server

Microsoft's enterprise database. Strong Windows integration, T-SQL extensions. Multiple editions available.

Oracle

Enterprise-focused, expensive. Powerful features for complex enterprise applications. PL/SQL extensions.

SQLite

Embedded database, single file. No server required. Common in mobile apps, testing, simple applications.

Cloud Databases

AWS RDS/Aurora, Azure SQL, Google Cloud SQL. Managed services hosting the above engines plus cloud-native options.

Career paths leveraging SQL skills span many roles across the technology industry. Database administrators (DBAs) manage database systems with deep operational expertise. Data analysts use SQL for business intelligence and reporting. Data engineers build data pipelines using SQL alongside other tools. Application developers write SQL embedded in their code. Business analysts query databases for reporting. Quality assurance engineers test database functions. Each role uses SQL differently — DBAs focus on administration; analysts focus on analytical queries; developers focus on integration with applications. Strong SQL skills support career progression across these various paths.

For students considering data-related careers, SQL proficiency is essentially mandatory. Job postings for data analysts, data scientists, business analysts, and many other roles list SQL as a required skill. Programming bootcamps and data science programs include SQL fundamentals. Online courses through platforms like Coursera, DataCamp, Udemy, and many others teach SQL from beginner to advanced levels. Free resources include SQL tutorials at W3Schools, official documentation for various databases, and many YouTube channels covering SQL topics. The investment in SQL skills produces career-spanning value.

Looking forward, SQL continues evolving despite being a 50-year-old technology. New database systems often add SQL interfaces for compatibility. Cloud database services make SQL more accessible than ever. NoSQL databases initially competed with SQL but increasingly add SQL-like query languages. NewSQL databases combine relational benefits with horizontal scalability. Modern data warehouses (Snowflake, BigQuery, Redshift) use SQL extensively. The fundamental concepts of SQL remain valuable across decades of database technology evolution. Students learning SQL today are investing in skills that will remain relevant throughout their careers.

Common interview questions for SQL-related roles assess candidates' practical query construction abilities. Typical questions include writing queries against given schema with multiple tables, finding duplicate records, identifying customers without orders, calculating running totals, ranking within groups using window functions, pivoting data across rows and columns, finding gaps in sequences, and similar pattern-based challenges. Practicing these common patterns through resources like LeetCode SQL problems, HackerRank SQL challenges, or specialized SQL interview prep books builds confidence for technical interviews. Most candidates need explicit interview preparation beyond general SQL knowledge to perform well in the time-pressured environment.

SQL injection attacks remain a critical security concern for applications using SQL. When user input is concatenated directly into SQL queries, malicious input can manipulate the resulting SQL to access unauthorized data, modify records, or even drop tables. The defense is parameterized queries (prepared statements), where the SQL structure is fixed and user input becomes parameter values rather than SQL fragments.

Modern application frameworks support parameterized queries through their database access libraries. Despite well-known defenses, SQL injection vulnerabilities continue appearing in real applications, often due to careless code reviews or legacy code with concatenated SQL. Awareness of injection risks supports writing safer code from the beginning.
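The contrast between concatenated and parameterized queries can be shown in a few lines (a sketch with a hypothetical `users` table; the ? placeholder is sqlite3's parameter syntax, other drivers use %s or named parameters):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('ana', 's3cret')")

# Hostile input designed to widen a naively concatenated WHERE clause.
user_input = "nobody' OR '1'='1"

# UNSAFE: string concatenation lets the input rewrite the query,
# turning it into ... WHERE name = 'nobody' OR '1'='1'.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()
print(unsafe)  # [('ana',)] -- injection succeeded; every row matches

# SAFE: the ? placeholder treats the input purely as a value.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
print(safe)  # [] -- no user is literally named that string
```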

SQL Productivity Tips

EXPLAIN: Command to analyze query performance
10x+: Speedup possible from proper indexing
ACID: Transaction guarantees (Atomicity, Consistency, Isolation, Durability)
Multiple: Daily SQL queries written by a typical data analyst

For data analysts working primarily with SQL, several patterns appear repeatedly across analytical work. Time-series analysis using date functions to group by week, month, quarter, year. Cohort analysis tracking user behavior over time using PARTITION BY. Funnel analysis measuring conversion through sequential events using JOIN or window functions. Retention analysis examining return rates over time periods. RFM (Recency, Frequency, Monetary) analysis for customer segmentation. Each of these analytical patterns has standard SQL implementations that analysts master through repetition. Building a personal library of analytical query templates accelerates work on similar problems over time.
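The time-series pattern, grouping events by month, might look like this in SQLite (strftime is SQLite's date-part function; MySQL would use DATE_FORMAT and PostgreSQL date_trunc; the `events` table is hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, created_at TEXT)")
conn.executemany("INSERT INTO events VALUES (?, ?)", [
    (1, "2024-01-05"), (2, "2024-01-20"), (1, "2024-02-03"),
])

# Bucket each event into a year-month string, then count per bucket.
monthly = conn.execute(
    "SELECT strftime('%Y-%m', created_at) AS month, COUNT(*) "
    "FROM events GROUP BY month ORDER BY month"
).fetchall()
print(monthly)  # [('2024-01', 2), ('2024-02', 1)]
```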

Database documentation is often overlooked but valuable. Schema documentation describing table purposes, column meanings, relationships, and business rules supports query writing across teams. Sample queries demonstrating common patterns help new team members. Performance notes documenting query optimization considerations save time when issues arise. Modern tools like dbt (data build tool) integrate documentation with SQL pipelines. Some companies maintain internal data catalogs with comprehensive schema documentation. Regardless of specific tools, well-documented databases support more efficient analytical and development work over time.

Modern SQL extensions in major databases continue expanding capabilities. PostgreSQL particularly has extensive functionality including arrays, JSON, geographic data, full-text search, and various others. SQL Server includes T-SQL extensions for procedural programming, stored procedures, triggers, and many enterprise features. MySQL has its own extensions including JSON support and various performance features. Snowflake, BigQuery, and other modern data warehouses extend SQL with cloud-specific features. Vendor-specific features create some portability challenges but enable powerful capabilities within each system. Awareness of which features are standard versus vendor-specific helps when planning multi-database deployments or anticipating future migrations.

Looking forward, AI-assisted SQL writing tools (text-to-SQL, code completion) increasingly support analyst productivity. Tools generating SQL from natural language questions help non-technical users access data. AI code completion in modern IDEs accelerates SQL writing for technical users. These tools work best for common patterns; complex analytical queries still require human SQL expertise. The combination of human SQL skills with AI assistance produces better outcomes than either alone — analysts using AI tools strategically maintain productivity advantages over those relying purely on either approach.

Learning SQL: Pros and Cons

Pros
  • +Mandatory skill for most data-related careers
  • +Decades-old technology with stable fundamentals
  • +Standardized across many database systems
  • +Free databases (PostgreSQL, MySQL, SQLite) for practice
  • +Substantial online resources for learning
  • +Continued relevance in cloud and modern data stacks
Cons
  • Vendor-specific extensions create non-portable code
  • Performance optimization is a complex specialty
  • NULL handling and other quirks confuse beginners
  • Schema design decisions affect long-term maintenance
  • Database administration is a separate, complex specialty
  • Some advanced features are available only in specific databases


About the Author

James R. Hargrove, JD, LLM

Attorney & Bar Exam Preparation Specialist

Yale Law School

James R. Hargrove is a practicing attorney and legal educator with a Juris Doctor from Yale Law School and an LLM in Constitutional Law. With over a decade of experience coaching bar exam candidates across multiple jurisdictions, he specializes in MBE strategy, state-specific essay preparation, and multistate performance test techniques.