How to Delete Duplicate Rows in Excel: 4 Methods Step-by-Step
How to delete duplicate rows in Excel: 4 methods including Remove Duplicates button, UNIQUE function, Advanced Filter, and Power Query. Examples and tips.

Knowing how to delete duplicate rows in Excel is essential for data cleaning. Whether you're consolidating customer lists, removing duplicate order entries, or cleaning imported data, Excel offers several reliable methods. The fastest is the built-in Remove Duplicates button, but the UNIQUE function, Advanced Filter, and Power Query each have specific advantages.
What 'duplicate rows' means. By default in Excel, duplicate rows are rows where every column matches. You can also define duplicates by specific columns: for example, treating rows as duplicates whenever the email matches, even if other fields differ. Understanding this distinction is critical to avoid accidentally losing data.
Important note. Always back up your data before removing duplicates. Once removed and saved, deleted rows are gone permanently. There's no Ctrl+Z after closing the file.
This guide covers 4 reliable methods with step-by-step instructions, examples, common mistakes to avoid, and tips for choosing the right approach for your specific scenario.
4 Methods to Delete Duplicates
- Method 1: Remove Duplicates button — Fastest, built-in, destructive
- Method 2: UNIQUE function — Excel 365, dynamic, non-destructive
- Method 3: Advanced Filter — Copies unique to new location, preserves original
- Method 4: Power Query — Best for recurring tasks, automated workflows
Quick Tips Before You Start
- Backup first: Always copy data before destructive operations
- Choose criteria columns: Define what makes a duplicate
- Sort first: Excel keeps the first occurrence, so sort to control retention
- Standardize data: Clean whitespace, case, and formatting before deduping
- Verify results: Count before and after to confirm the expected total
- Test on a subset: Especially for large datasets
Method 1: Remove Duplicates button. The fastest built-in approach.
Step 1: Select your data range. Click anywhere within your data table — Excel auto-selects connected range. Or manually select specific rows and columns.
Step 2: Go to Data tab. Click 'Remove Duplicates' button in the Data Tools group. Dialog box opens.
Step 3: Choose criteria columns. Dialog shows all columns in your data. Check the columns that determine what counts as a duplicate. Default: all columns checked (entire row must match). Uncheck columns that shouldn't matter for deduplication.
Step 4: Verify 'My data has headers' option. If your first row contains column headers, leave checked. Otherwise uncheck.
Step 5: Click OK. Excel removes duplicate rows immediately. Summary message shows: 'X duplicates removed, Y unique values remain.'
Example: Customer list with duplicate emails. Columns A-D: Name, Email, Phone, City. Multiple rows have same email. Select data. Data → Remove Duplicates. Check ONLY 'Email' column. Click OK. All rows with duplicate emails removed, keeping only first occurrence.
Important: which duplicate is kept. Excel keeps the FIRST occurrence in your data. To control which duplicate stays, sort your data first. To keep newest entries: sort by date descending. To keep most complete entries: sort to put most complete rows first.
Limitations of this method. Destructive: once the file is saved and closed, deleted rows are gone. No undo after closing the file. Doesn't show what was removed beyond a count. Case-insensitive by default ('apple' = 'APPLE').
When to use this method. One-time cleanup. Quick deduplication. Small to medium datasets. You don't need to preserve original data.
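The keep-first behavior described above can be sketched outside Excel. Below is a minimal Python sketch (the sample customer data and column indexes are hypothetical) of what Remove Duplicates does: case-insensitive matching on the chosen key columns, keeping only the first occurrence of each key.

```python
# Sketch of Excel's Remove Duplicates semantics, assuming rows are lists
# and key_cols picks the columns that define a duplicate.
# Like Excel, matching is case-insensitive and the FIRST occurrence wins.

def remove_duplicates(rows, key_cols):
    """Return rows with later duplicates (by key_cols) dropped."""
    seen = set()
    kept = []
    for row in rows:
        # Case-insensitive key, mirroring Excel's default matching
        key = tuple(str(row[i]).lower() for i in key_cols)
        if key not in seen:
            seen.add(key)
            kept.append(row)
    return kept

customers = [
    ["Ann Lee", "ann@x.com", "555-0100", "Boston"],
    ["A. Lee",  "ANN@X.COM", "555-0101", "Salem"],   # duplicate email, different case
    ["Bo Chan", "bo@x.com",  "555-0102", "Austin"],
]

# Dedup by the Email column only (index 1), like checking only 'Email' in the dialog
clean = remove_duplicates(customers, key_cols=[1])
print(len(clean))  # 2: the second 'ann@x.com' entry is dropped
```

Sorting the list before calling the function is the same trick the article recommends: put the row you want to keep first, and the keep-first rule does the rest.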

Remove Duplicates Method
1. Select: click anywhere in your data table or select the range.
2. Data tab: click 'Remove Duplicates' in the Data Tools group.
3. Columns: check the columns that identify duplicates.
4. Headers: verify the 'My data has headers' option matches your data structure.
5. OK: Excel removes duplicates and shows a count.
Caution: destructive; make a copy before running.
Method 2: UNIQUE function (Excel 365). Non-destructive and dynamic.
Syntax. =UNIQUE(array, [by_col], [exactly_once]). Returns unique rows or values from an array.
Basic usage. =UNIQUE(A2:D100). Returns unique rows from columns A through D. Results spill to adjacent cells automatically. Original data unchanged.
For single column. =UNIQUE(A2:A100). Returns unique values from column A only.
Example: Customer dedup. Data in A2:D100 (Name, Email, Phone, City). In F2, enter =UNIQUE(A2:D100). Unique rows appear starting at F2. Original A2:D100 unchanged. Use these unique rows as your clean dataset.
Combine with other functions. =SORT(UNIQUE(A2:A100)) returns sorted unique values. =UNIQUE(FILTER(A2:D100, B2:B100="VIP")) returns unique rows for VIP customers (filter first, then dedup, so the ranges stay the same size). =COUNTA(UNIQUE(A2:A100)) counts distinct values.
Exactly-once option. =UNIQUE(A2:A100, FALSE, TRUE). Returns only values that appear EXACTLY ONCE (no duplicates kept at all). Different from deduplication — keeps only true singletons.
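The difference between UNIQUE's default behavior and the exactly_once option can be illustrated with a short Python sketch (the sample values are invented): dict.fromkeys plays the role of plain UNIQUE, while Counter isolates the true singletons.

```python
# Contrast UNIQUE's default (dedup, keep one of each) with
# exactly_once=TRUE (keep only values that never repeat).
from collections import Counter

values = ["red", "blue", "red", "green"]

# =UNIQUE(A2:A5): every distinct value, in first-seen order
unique_vals = list(dict.fromkeys(values))           # ['red', 'blue', 'green']

# =UNIQUE(A2:A5, FALSE, TRUE): only values appearing exactly once
counts = Counter(values)
singletons = [v for v in values if counts[v] == 1]  # ['blue', 'green']

print(unique_vals, singletons)
```

Note how 'red' survives deduplication but not the exactly-once filter: that is precisely why the article calls exactly_once "different from deduplication."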
Pros of UNIQUE. Non-destructive: original data preserved. Dynamic: updates when source changes. Easy to use. Combines well with other functions. No menu navigation.
Cons of UNIQUE. Excel 365 only (not available in older versions). Fails with a #SPILL! error if cells in the spill range are occupied. Creates a copy rather than removing rows from the source.
Common error. #SPILL! error means the UNIQUE result would overwrite existing cells. Solution: clear cells around the target or move formula to a different location.
When to use UNIQUE. You want to preserve original data. You need dynamic updating. You're using Excel 365. You want to combine with other formulas like SORT and FILTER.
Method Comparison
- Remove Duplicates button: fastest, built-in, destructive; works in all Excel versions. Best for one-time cleanup when you don't need the original data. Steps: Data tab → Remove Duplicates → choose columns → OK.
- UNIQUE function: non-destructive and dynamic; Excel 365 only. Best for live, formula-driven results.
- Advanced Filter: copies unique rows to a new location; works in all versions. Best when the original must stay intact.
- Power Query: repeatable and automated; handles large and external data. Best for recurring cleanup workflows.
Method 3: Advanced Filter. Reliable alternative when UNIQUE isn't available.
Step 1: Go to Data tab. Click 'Advanced' in Sort & Filter group. Advanced Filter dialog opens.
Step 2: Choose action. Two options: 'Filter the list, in-place' (hides duplicates) or 'Copy to another location' (creates clean copy). Choose 'Copy to another location' to preserve original.
Step 3: Set list range. Enter source range (e.g., A1:D100). Or click in field and select range manually.
Step 4: Set criteria range (optional). Used for criteria-based filtering. Leave blank for simple deduplication.
Step 5: Set copy-to location. Cell where unique rows should be copied. Click in field, then click target cell (e.g., F1 or another sheet).
Step 6: Check 'Unique records only.' Critical step — this is what actually removes duplicates.
Step 7: Click OK. Unique rows copied to specified location. Original data unchanged.
Example. Customer data in A1:D100. Copy unique to F1. Check 'Unique records only.' Click OK. Unique customer rows appear at F1. Original list intact.
Pros. Non-destructive. Works in all Excel versions. Can copy to different sheet. Preserves original.
Cons. Multiple steps. Less intuitive than other methods. Doesn't update if source changes.
When to use. You need to preserve original data. You can't use UNIQUE (older Excel). You want unique data in a specific location. One-time deduplication.
Advanced Filter Steps
1. Data → Advanced: in the Sort & Filter group; the Advanced Filter dialog opens.
2. Action: choose 'Copy to another location' (preserves the original).
3. List range: the source data range (with or without headers).
4. Copy to: the target cell for unique results.
5. 'Unique records only': critical, this is what enables deduplication.
6. OK: unique rows are copied; the original is preserved.
Method 4: Power Query. Best for recurring tasks and large datasets.
When to use. You'll clean similar data repeatedly. You're importing data from external sources. You need automated workflow. You want clean original data preserved.
Setup. Data tab → From Table/Range. Power Query Editor opens with your data loaded.
Remove duplicates step. Right-click column header → Remove Duplicates. Or Home tab → Remove Rows → Remove Duplicates. Power Query removes duplicate rows.
Multi-column duplicates. Select multiple columns first (Ctrl+click headers), then Remove Duplicates. Treats duplicates as rows matching ALL selected columns.
Close & Load. When done, click Close & Load. Data returns to Excel sheet. Original data preserved in source.
Reusability. Save the query. Refresh anytime source changes. Same cleanup applied automatically. Major time saver for recurring data.
Example: Monthly customer list cleanup. Import file. Apply Remove Duplicates step in Power Query. Save query. Each month: load new file, refresh query, get clean unique customer list automatically.
Pros. Automated repeatable workflow. Handles external data sources. Combines with other transformations (column splits, type changes, filters). Preserves original data. Strong for ETL workflows.
Cons. Learning curve for new users. Overkill for one-time cleanup. Adds complexity for simple tasks.
Tips. Best for recurring imports. Large datasets. Automated reports. Combine with other Power Query transforms.
Power Query in older Excel. Available in Excel 2010+ as add-in, built into Excel 2016+. Verify your version.
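Power Query itself is point-and-click, but the idea it embodies, a saved cleanup step you re-run on each new export, can be sketched as a small script. The CSV text and field names below are hypothetical stand-ins for a monthly export.

```python
# A repeatable dedup step in the spirit of a saved Power Query:
# the same function runs unchanged on each month's export.
import csv
import io

def dedupe_csv(text, key_fields):
    """Parse CSV text; drop rows whose key_fields repeat (keep-first)."""
    reader = csv.DictReader(io.StringIO(text))
    seen, clean = set(), []
    for row in reader:
        # Trim and lowercase the key, like standardizing before dedup
        key = tuple(row[f].strip().lower() for f in key_fields)
        if key not in seen:
            seen.add(key)
            clean.append(row)
    return clean

monthly_export = "Name,Email\nAnn,ann@x.com\nAnn L.,ANN@X.COM\nBo,bo@x.com\n"
rows = dedupe_csv(monthly_export, key_fields=["Email"])
print(len(rows))  # 2
```

The payoff is the same one the article credits to Power Query: the cleanup logic is written once and reapplied automatically, instead of being re-clicked every month.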
Common scenarios and which method works best.
Customer email list. Goal: unique emails for marketing. Method: Remove Duplicates by Email column. Steps: select data → Data → Remove Duplicates → check only 'Email' → OK. Tip: lowercase emails first to catch case variations.
Product catalog. Goal: one row per SKU. Method: Remove Duplicates by SKU. Tip: standardize SKU formatting (TRIM whitespace, uppercase) before deduping.
Sales transactions. Goal: report on unique customer-product pairs. Method: UNIQUE with multiple columns. Doesn't modify source data, dynamic.
Survey responses. Goal: unique responses by respondent. Method: Remove Duplicates by respondent ID or email. Verify which records to keep before running.
Database export cleanup. Goal: clean for upload to new system. Method: Power Query with deduplication step. Reusable for recurring exports.
Inventory consolidation. Goal: combine duplicate items, summing quantities. Method: Pivot Table by item, summing quantity column. Different from delete — aggregates.
Audit log dedup. Goal: identify exact event repeats. Method: Remove Duplicates on all columns. Be careful — sometimes legitimate identical events exist.
Multi-file consolidation. Goal: combine multiple Excel files removing duplicates. Method: Power Query consolidation + Remove Duplicates step. Recurring workflow.
Daily customer report. Goal: today's unique customer interactions. Method: Power Query with refresh-on-open. Automated.
Compliance audit cleanup. Goal: unique records for audit. Method: Backup → Conditional Formatting (identify) → Remove Duplicates (delete). Document process.
Preparing data for clean deduplication.
Standardize text. Apply TRIM to remove leading/trailing whitespace. Use UPPER or LOWER for case consistency. Apply TEXT function for formatting consistency.
Standardize numbers. Verify all numbers stored as numbers (not text). Use VALUE() or *1 to convert text to numbers. Format cells appropriately.
Sort before deduping. Sort data by your retention criteria. Excel keeps the first occurrence — sort to control which one. Common: sort by date descending to keep newest.
Backup. Save copy of original data. Once Remove Duplicates runs and file saved, deleted rows gone.
Identify columns for deduplication. Decide which columns identify a duplicate. Sometimes obvious (email, SKU). Sometimes complex (multiple columns must match).
Handle whitespace. Leading/trailing spaces cause 'apple ' and 'apple' to be different. Always TRIM before deduping.
Handle case. By default Excel treats 'Apple' and 'APPLE' as the same. If case sensitivity matters, you may need a different approach (e.g., a formula-based dedup using EXACT for case-sensitive comparison).
Handle special characters. Apostrophes, quotation marks, special characters can cause subtle differences. Consider cleaning if matching by text.
Use helper columns. Create helper column with cleaned/standardized version. Dedup on helper column. Then delete helper column.
Validate data types. Number stored as text? Date stored as text? Will cause matching issues. Convert before deduping.
Examine sample. Look at duplicate pairs before deleting. Are they truly duplicates? Subtle differences matter.
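The standardization steps above (TRIM, case, data types) amount to computing one cleaned key per row, the equivalent of the helper-column technique. A hedged Python sketch, with made-up sample values:

```python
# Normalization key: trim whitespace, lowercase text, and unify
# numbers-stored-as-text, so near-duplicates compare equal.
def norm_key(value):
    """Return a cleaned comparison key for one cell value."""
    s = str(value).strip()       # TRIM: drop leading/trailing spaces
    try:
        return str(float(s))     # 123 and "123" now compare equal
    except ValueError:
        return s.lower()         # LOWER: case-insensitive text

print(norm_key(" Apple ") == norm_key("apple"))  # True
print(norm_key(123) == norm_key("123"))          # True
```

Dedup on the key rather than the raw cell, then discard the key, exactly as you would a helper column.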
Pre-Dedup Prep
- TRIM: =TRIM(A1) removes extra spaces, a common cause of missed duplicates.
- Case: LOWER or UPPER for consistency, especially emails.
- Types: numbers as numbers, dates as dates, not text.
- Sort: controls which duplicate Excel keeps (the first occurrence).
- Backup: save a copy before destructive operations.
- Helper column: dedup on a cleaned version, then delete the column.
Common mistakes to avoid when deleting duplicate rows.
Mistake 1: Not backing up first. Remove Duplicates is destructive. Once saved, deleted rows gone permanently. Always backup before running.
Mistake 2: Wrong columns selected. Choosing wrong dedup columns deletes too many or too few rows. Verify which columns actually identify duplicates.
Mistake 3: Forgetting whitespace. 'apple' and 'apple ' (trailing space) treated as different. TRIM first.
Mistake 4: Sort order wrong. Excel keeps first occurrence. If newest entries should be kept, sort by date descending first.
Mistake 5: Mixing data types. Number 123 vs text '123' treated as different. Standardize types first.
Mistake 6: Trusting auto-detection. Excel auto-selects range. Verify it's correct before clicking OK.
Mistake 7: Saving and closing immediately after. If you only notice a mistake once the file is saved and closed, you can't recover. Verify the result first.
Mistake 8: Using on time-series data inappropriately. Sometimes 'duplicates' are legitimate (same event different times). Be intentional.
Mistake 9: Removing legitimate duplicates. Two customers with same name but different emails? Different people. Verify before merging.
Mistake 10: Forgetting to document process. 6 months later you won't remember which method was used. Document or save macro.
1. Always backup first. Copy data to another sheet or save backup file before Remove Duplicates.
2. Verify expected count. Count rows before. Estimate how many duplicates exist. Verify the final count matches expectation.
3. TRIM whitespace. Hidden cause of missed duplicates. Always TRIM before deduping.
4. Choose criteria carefully. Which columns identify a true duplicate? Don't accidentally delete legitimate variations.
5. Sort intentionally. Excel keeps first occurrence. Sort to control which duplicate stays.
6. Test on subset. For large datasets, test methodology on 100 rows first. Verify before running on full data.
7. Document your process. Add notes column or save Power Query for reuse. Future you will thank you.
8. Verify results. Spot-check sample rows after deletion. Confirm expected outcome.
9. Verify before you save and close. Once the file is closed, deleted rows are gone forever. Check the result first.
10. Communicate with team. If cleaning shared data, notify team. Avoid confusion.
Finding duplicates before deleting (review approach).
Why review first. Many duplicates are legitimate. Verify which to delete and which to keep. Prevents data loss.
Method 1: Conditional Formatting. Select data → Home → Conditional Formatting → Highlight Cells Rules → Duplicate Values. Excel highlights all duplicates. Visual review. Doesn't delete anything — just identifies.
Method 2: COUNTIF helper column. In new column: =COUNTIF(A:A, A2) returns count of each value. Filter for values > 1 to see duplicates. Use to identify which rows to remove.
Method 3: COUNTIFS for multi-column. =COUNTIFS(A:A, A2, B:B, B2) returns count of identical Name+Email combinations. Filter for > 1 to find duplicates across multiple criteria.
Method 4: Sort and visual scan. Sort by deduplication column. Duplicates appear together. Easy to spot. Manual review.
Method 5: Pivot Table. Insert Pivot Table. Drag deduplication field to Rows. Drag any field to Values (set to Count). Shows count of each unique value. Identifies frequencies.
Strategy: review then dedup. Step 1: Identify duplicates with Conditional Formatting or COUNTIF. Step 2: Review each duplicate set. Step 3: Determine which to keep based on criteria (newest, most complete, etc.). Step 4: Sort by your retention criteria. Step 5: Use Remove Duplicates with confirmed columns.
This approach. Slower but safer. Catches edge cases. Builds confidence in result. Recommended for important data.
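The COUNTIF helper-column idea translates directly into a script: count each key, then flag (rather than delete) the rows whose key appears more than once. The sample emails below are invented.

```python
# Review-first dedup: flag duplicates for inspection instead of deleting.
from collections import Counter

emails = ["ann@x.com", "bo@x.com", "ann@x.com", "cy@x.com"]
counts = Counter(emails)  # per-row equivalent of =COUNTIF(A:A, A2)

# Filter for counts > 1, like filtering the helper column in Excel
flagged = [(e, counts[e]) for e in emails if counts[e] > 1]
print(flagged)  # [('ann@x.com', 2), ('ann@x.com', 2)]
```

Nothing is removed here; as in the review-then-dedup strategy above, deletion only happens after a human confirms which copies to keep.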
Review Approach
- Conditional Formatting: Home → Conditional Formatting → Highlight Cells Rules → Duplicate Values. Highlights duplicates visually without deleting. Best for initial identification.
- COUNTIF helper column: =COUNTIF(A:A, A2); filter for counts > 1 to list duplicates.
- COUNTIFS: the same idea across multiple columns.
- Sort and scan: duplicates land next to each other for manual review.
- Pivot Table: shows the count of each unique value.

Edge cases and advanced scenarios.
Case-sensitive duplicates. Default Excel treats 'apple' and 'APPLE' as duplicates. If case matters: convert to lowercase first with helper column, then dedup on cleaned column.
Whitespace-sensitive. ' email@x.com' (leading space) and 'email@x.com' are different. Solution: TRIM all text first.
Number vs text. 123 (number) vs '123' (text) treated as different. Standardize data type first.
Fuzzy matches. 'Smith' vs 'Smyth' — slight variations. Excel doesn't natively handle fuzzy matching. Power Query has approximate match feature. Or use third-party tools.
Conditional duplicates. Want to keep certain duplicates? Filter first, then dedup the rest. Or use formula approach for complex retention rules.
Different rules per column. Some columns case-sensitive, others not. Combine standardization in helper columns before deduping.
Large datasets. 100K+ rows: Power Query handles well. Excel native functions may slow. Test on subset first.
Multi-sheet dedup. Excel native methods work on one sheet. To dedup across sheets: copy data to single sheet first, then dedup. Or use Power Query combine sheets feature.
External data sources. Importing from CSV, database, web? Use Power Query — handles imports + dedup in single workflow.
Live data. Source updates frequently? UNIQUE function dynamic — updates as source changes. Power Query: refresh manually.
Inferred duplicates. Two rows representing same entity but different data. Manual review essential. Cannot fully automate detection.
Edge Cases
- Case sensitivity: convert to lowercase first using a helper column.
- Whitespace: TRIM all text before dedup; a common cause of missed duplicates.
- Mixed types: numbers stored as text don't match real numbers; standardize types first.
- Fuzzy matches: Power Query approximate matching, or third-party tools.
- Large datasets: Power Query handles 100K+ rows well; test on a subset first.
- Multi-sheet data: combine into a single sheet first, or use Power Query.
Verifying your dedup worked correctly. Check row count before and after using =COUNTA(A:A). Spot-check the first 10-20 rows post-dedup to verify uniqueness by your criteria. Confirm specific known duplicates are gone. Use Pivot Tables to group and verify each value appears once.
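These verification checks can be expressed as simple assertions, the scripted analogue of comparing COUNTA before and after (the sample data is hypothetical):

```python
# Verify a dedup: compare before/after counts and confirm
# that every remaining key is unique.
before = ["a@x.com", "b@x.com", "a@x.com"]
after = list(dict.fromkeys(before))   # keep-first dedup

removed = len(before) - len(after)
print(removed)                        # 1 duplicate removed
assert len(after) == len(set(after))  # every key now appears once
```

If the `removed` count doesn't match your estimate of how many duplicates existed, that mismatch is the signal to stop and investigate before saving.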
Building reusable workflows. Save Excel templates with established dedup workflow. Convert ranges to Excel Tables (Ctrl+T) for better structure. Combine methods: Conditional Formatting to identify → review → Remove Duplicates on confirmed range — safer than blind deletion. Document standardization rules so team applies consistently. Keep audit log: original count, removed count, final count for compliance and debugging. Save multiple versions so you can roll back if needed.
Best Practices
- Templates: save your workflow for reuse; Power Query queries are reusable.
- Tables: Ctrl+T converts a range to a Table, a better structure for dedup.
- Safe sequence: Conditional Formatting → review → Remove Duplicates.
- Definition: agree on what counts as a duplicate for consistency.
- Verification: count, spot-check, and compare to the expected total.
- Versioning: keep multiple file versions so you can roll back.
Common questions about deleting duplicate rows in Excel.
Can I undo Remove Duplicates? Yes, immediately with Ctrl+Z. But once you save and close the file, the deleted rows are gone permanently. Always back up before running.
Does Remove Duplicates keep the first or last occurrence? First occurrence in your data. To keep newest entries, sort by date descending before running. To keep oldest, sort ascending.
Can I dedup based on specific columns only? Yes. In the Remove Duplicates dialog, check only the columns that identify duplicates. Other columns are ignored for matching but kept in retained rows.
Does Excel treat 'apple' and 'APPLE' as duplicates? Yes — by default, Remove Duplicates is case-insensitive. If case matters, standardize text first using helper column with LOWER() or UPPER().
How do I find duplicates without deleting them? Use Conditional Formatting → Highlight Cells Rules → Duplicate Values. Or COUNTIF formula in helper column: =COUNTIF(A:A, A2) shows count for each row. Filter for > 1 to see duplicates.
Why are 'apple ' and 'apple' treated as different? Leading/trailing whitespace makes them different. Always TRIM text before deduplicating. =TRIM(A1) removes whitespace.
Does Remove Duplicates work with filtered data? It works on the underlying data, ignoring filter state. So rows hidden by filter still get evaluated. Be careful with filtered views.
Can I undo if I close the file? No. Once saved and closed, deleted rows are gone. Always work on a copy or backup file for dedup operations.
Can I dedup across multiple sheets? Not directly. Combine data into single sheet first, then dedup. Or use Power Query to merge sheets and dedup in one step.
What about partial duplicates (some fields match, others don't)? Decide which fields identify a duplicate. Check only those columns in Remove Duplicates. Other columns will be kept but won't determine matching.
Final thoughts. Knowing how to delete duplicate rows in Excel is a fundamental data-cleaning skill. With four reliable methods available, you can handle any scenario from quick one-time cleanup to recurring automated workflows.
Start with the basics. The Remove Duplicates button handles the vast majority of cases in under a minute. Master this method first, then add others as you encounter different scenarios.
Use review methods for important data. Conditional Formatting and COUNTIF help you identify duplicates before deletion. Safer than blind delete. Better for data you can't recover.
Standardize before deduping. Whitespace, case differences, formatting variations cause missed duplicates. Always TRIM, standardize case, verify data types before running Remove Duplicates.
Backup always. Destructive operations are not undoable after save. Take 2 seconds to copy your data first. Prevents data loss.
Build templates for recurring work. Power Query workflows save time on monthly cleanup. Reusable, automated, refreshes with new data.
Document your process. Future you (and colleagues) will appreciate notes on what was removed, why, and how. Especially important for compliance and audit data.
Master multiple methods. Different scenarios benefit from different approaches. Knowing Remove Duplicates, UNIQUE, Advanced Filter, and Power Query makes you a complete Excel power user.
Data quality is foundational. Clean unique data drives better decisions, better reports, better outcomes. Investing time in deduplication pays back many times over. Make it a habit, not an afterthought.