How to Remove Duplicate Rows in Excel: 5 Best Methods
Remove duplicate rows in Excel five ways: the Remove Duplicates dialog, the UNIQUE function, Advanced Filter, COUNTIF, and Power Query. Step-by-step instructions for each, plus the prep work that makes them reliable.

Duplicate rows mess with totals. They skew counts, double-charge customers in invoice lists, and turn clean reports into noise. If you've ever stared at a spreadsheet wondering why your revenue figure looked 12% too high, odds are good a copy-paste hiccup left twin rows hiding in the middle of your data — and yes, you'll need to hunt them down.
Excel gives you five honest ways to deal with this. The built-in Data > Remove Duplicates dialog is the fastest. The UNIQUE function in Excel 365 returns a clean list dynamically, so the source can change and the output keeps up. Advanced Filter copies unique records to a new location without touching the original. COUNTIF flags repeats so you can review before deleting. And Power Query handles case-insensitive matching, whitespace, and refreshable workflows that the dialog box can't touch.
Pick the wrong tool and you'll either nuke records you wanted to keep or leave dupes lurking because of an extra space or a stray capital letter. This guide walks through each method, when to reach for it, and the pitfalls that trip up most people the first three times they try. Short version: never run a destructive method on data you haven't backed up first.
Excel Dedup Methods at a Glance
Before you click anything, sort out what counts as a duplicate. In Excel's world, two rows match only if every selected column contains an identical value. A trailing space in one cell. A capital "S" instead of lowercase. A leading zero stripped by a previous paste. Any of these and the dialog treats the rows as unique — and they stay.
That's why pros normalize first. Run TRIM() to strip spaces. Use LOWER() or UPPER() to flatten case. Convert text-numbers to actual numbers with VALUE(). Only then do you start removing rows. Skip this prep and you'll get half the cleanup you expected, then spend the next twenty minutes wondering why "john@email.com" and "John@Email.com" both survived.
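As a concrete sketch, assuming your raw values start in A2 and you're working in a helper column (adjust the references to fit your sheet; VALUE applies only to cells holding text-numbers):

```
=TRIM(A2)            strips leading, trailing, and doubled internal spaces
=LOWER(TRIM(A2))     strips spaces and flattens case in one pass
=VALUE(TRIM(A2))     turns the text "123" into the number 123
```

Fill the helper down, copy it, Paste Special > Values back over the source, then delete the helper before you dedupe.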

Normalize Before You Dedupe
TRIM, LOWER, and VALUE are your three friends. Run them in helper columns, paste-as-values back over the originals, then dedupe. Skip this step and Excel will leave half your duplicates behind because of trailing spaces and stray capitals — and you'll blame the dialog for being broken when the data is the actual culprit.
The Remove Duplicates dialog ships with every modern version of Excel — desktop, web, even Excel for Mac. You select your range, head to the Data tab, and click Remove Duplicates. Excel pops a dialog listing every column in your selection. Tick the columns that define uniqueness, click OK, and Excel deletes the duplicate rows in place.
Speed is the selling point here. A million-row table cleans up in seconds. The catch? It's destructive. The rows are gone the moment you click OK, and Ctrl+Z is your only safety net. Close the file without saving and the undo history goes with it. Save first. Always.
The Five Methods Compared
- Remove Duplicates dialog: fastest, but destructive. Data tab, one click, done. Best for one-off cleanups on clean data.
- UNIQUE function: dynamic and non-destructive. Excel 365/2021 only. Spills a fresh array that refreshes when the source changes.
- Advanced Filter: copies unique records to a new location. Works in every Excel version. Static result.
- COUNTIF: marks duplicates without deleting, so you can review before destroying. Pairs well with sort/filter.
- Power Query: refreshable, and can match case-insensitively with a one-line option. Handles repeated cleanups on imports that keep coming back.
Here's where most people stumble: which columns do you actually tick? If you select only the Email column on a customer list, Excel keeps the first row it finds and discards every subsequent row sharing that email — even if the name, address, and order total differ. That might be what you want (one row per customer). Or it might destroy half your transaction history. Stop and think before you tick.
The dialog tells you exactly what happened: "X duplicate values found and removed; Y unique values remain." Read the count. If the math looks wrong, undo immediately and reconsider which columns define a match.

Method Walkthroughs
Select your data range. Click Data > Remove Duplicates. In the dialog, untick everything, then tick only the columns that define uniqueness. Click OK. Excel reports how many were removed and how many remain. Save before you do it, because undo only works while the file's open.
UNIQUE is the modern way to handle this. Drop =UNIQUE(A2:A1000) in an empty cell and Excel spills a dynamic array of every distinct value from the range. Add the source data, the formula updates. Delete a row, it updates again. Nothing destructive happens to the original.
UNIQUE accepts three arguments. The first is your range. The second is a logical telling Excel whether to compare by columns (TRUE) or by rows (FALSE — the default). The third is exactly_once, which when set to TRUE returns only items that appear once and ignores everything else. Useful for finding values that shouldn't have duplicates and don't.
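For instance, with placeholder ranges (adjust to fit your data):

```
=UNIQUE(A2:A1000)                 distinct values, duplicates collapsed to one
=UNIQUE(A2:A1000, FALSE, TRUE)    only values that appear exactly once in the source
```

The FALSE in the second formula is just the default row-wise comparison, written out so the third argument slot can be reached.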
The function requires Excel 365 or Excel 2021. Older versions choke on it. If you're stuck on Excel 2019, you'll need Advanced Filter or a helper-column trick — both covered below.
If you're on Excel 2019 or earlier, UNIQUE returns a #NAME? error. You have two options. Either upgrade to a Microsoft 365 subscription for full dynamic array support, or fall back to Advanced Filter for the same non-destructive result. Don't waste an hour googling syntax — the function literally doesn't exist in older builds.
What if you want unique rows, not just unique values from one column? Wrap your whole range: =UNIQUE(A2:D1000). Excel spills a 4-column array where every row is distinct across all four columns. That's the dynamic equivalent of the Remove Duplicates dialog, except your source stays untouched and the result refreshes automatically.
Pair UNIQUE with SORT and FILTER for a fully dynamic report. =SORT(UNIQUE(FILTER(A2:D1000, B2:B1000="Active"))) returns sorted unique rows where column B equals "Active". One formula, zero manual cleanup, refresh-proof. This is the workflow that's quietly replacing pivot tables for a lot of analysts.
Advanced Filter is the old-school way, and it's still useful when you want unique records copied to a different location without touching the source. Find it on the Data tab under Sort & Filter > Advanced. Pick "Copy to another location". Set your List range. Leave Criteria blank. Tick "Unique records only". Pick a destination cell. Click OK.
Excel copies every unique row to your destination, leaves the source intact, and you can verify the result before deleting anything. The downside is it's static — change the source and the copy doesn't update. For one-off cleanups on Excel 2019 or earlier, it's the most reliable non-destructive option. For ongoing reports, UNIQUE or Power Query wins.

Pre-Dedup Cleanup Checklist
- ✓ Save the workbook or duplicate the sheet — give yourself an undo path beyond Ctrl+Z.
- ✓ Run TRIM() in a helper column and paste values back to strip stray spaces.
- ✓ Apply LOWER() or UPPER() if case shouldn't matter for matching.
- ✓ Convert text-numbers with VALUE() so '123' and 123 don't both survive.
- ✓ Decide exactly which columns define a duplicate before opening the dialog.
- ✓ Run a COUNTIF helper first to preview what would be deleted.
- ✓ Verify the count in Excel's confirmation message looks reasonable.
- ✓ Spot-check a few removed candidates against your backup before saving.
Sometimes you want to see duplicates before you destroy them. COUNTIF is your friend. In a helper column next to your data, drop =COUNTIF($A$2:$A$1000, A2). Every row gets a count. Rows with a value of 1 are unique. Anything 2 or higher is a duplicate.
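Two variations of that helper are worth keeping handy (same assumed range; adjust to fit):

```
=IF(COUNTIF($A$2:$A$1000, A2)>1, "dup", "")    readable flag instead of a raw count
=IF(COUNTIF($A$2:A2, A2)>1, "dup", "")         flags only the 2nd+ occurrence of each value
```

The second formula uses an expanding range: as you fill it down, the first occurrence of each value stays blank and only the repeats get flagged. Filter on "dup" and delete, and you've reproduced the dialog's keep-the-first behavior with full visibility.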
Sort or filter the helper column to inspect what's about to be wiped out. You might find that the "duplicates" are actually legitimate repeat customers, or that a stripped leading zero is making "00123" and "123" collide when they shouldn't. Once you're satisfied, filter the helper for values >1, delete those rows, then delete the helper column.
For multi-column matching, use COUNTIFS. =COUNTIFS($A$2:$A$1000, A2, $B$2:$B$1000, B2) returns the count of rows where both column A and column B match the current row. Adjust the column count to fit your uniqueness definition. Pricier than the dialog in setup time, but you keep total visibility.
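The expanding-range trick works across multiple columns too, assuming uniqueness is defined by columns A and B together:

```
=IF(COUNTIFS($A$2:A2, A2, $B$2:B2, B2)>1, "delete", "keep")
```

Fill it down, filter for "delete", remove those rows, and the first occurrence of each A+B pair survives.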
Power Query is the heavyweight option, and it earns its weight. Built into Excel 2016+, it sits on the Data tab under Get & Transform Data. Click From Table/Range. The Power Query editor opens with your data loaded. Select the columns that define uniqueness, right-click any column header, choose Remove Duplicates, and click Close & Load.
One gotcha before you lean on it: case. Power Query's Remove Duplicates is case-sensitive by default. The M engine treats "John" and "JOHN" as distinct values, so both rows survive unless you intervene. The fix is small: either add a lowercase transform before the dedupe step, or pass Comparer.OrdinalIgnoreCase to Table.Distinct so the comparison ignores case. Whitespace is significant too, so add a Trim step first.
The killer feature is refreshability. The query saves with the workbook. Drop new data into the source table, hit Data > Refresh All, and Power Query re-runs every step in order: trim, lowercase, dedupe, load. Set it up once. Refresh forever. This is the right answer for any cleanup you'll do more than twice.
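Behind the Applied Steps pane, Power Query generates M code. A minimal sketch of a trim-then-dedupe query with case-insensitive matching, assuming a source table named Customers and a key column named Email (both hypothetical names):

```
let
    // Load the Excel table named "Customers" (assumed name)
    Source  = Excel.CurrentWorkbook(){[Name = "Customers"]}[Content],
    // Strip stray whitespace from the key column before comparing
    Trimmed = Table.TransformColumns(Source, {{"Email", Text.Trim, type text}}),
    // Dedupe on Email, ignoring case, so "John@x.com" matches "john@x.com"
    Deduped = Table.Distinct(Trimmed, {"Email", Comparer.OrdinalIgnoreCase})
in
    Deduped
```

Refreshing re-runs all three steps against whatever the source table holds at that moment.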
Dialog vs UNIQUE vs Power Query
- + Dialog is the fastest for one-off cleanups: three clicks and done.
- + UNIQUE refreshes automatically when source data changes: set and forget.
- + Power Query can match case-insensitively with a one-line comparer option, no pre-cleaning formulas needed.
- + Power Query queries save with the workbook and refresh on demand.
- + UNIQUE composes beautifully with SORT, FILTER, and other dynamic array functions.
- − Dialog is destructive: undo only works in the current session.
- − UNIQUE doesn't exist in Excel 2019 or earlier, so legacy users need another route.
- − Power Query has a learning curve; the M language feels alien at first.
- − Advanced Filter results are static and don't refresh with the source.
- − COUNTIF slows down dramatically on ranges over 100,000 rows.
One more trick worth knowing: Conditional Formatting can highlight duplicates visually before you delete anything. Select your range, go to Home > Conditional Formatting > Highlight Cells Rules > Duplicate Values. Excel paints every cell that appears more than once in red (or any color you pick). It's a quick visual scan before committing to a destructive action.
The catch is that the built-in Duplicate Values rule operates one column at a time. For multi-column duplicate detection, you'll need a formula-based rule with COUNTIFS. Slightly more setup, but it scales to any uniqueness definition you need.
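A formula-based rule looks like this, assuming the selection starts at row 2 and uniqueness spans columns A and B (adjust ranges and anchors to fit):

```
=COUNTIFS($A$2:$A$1000, $A2, $B$2:$B$1000, $B2) > 1
```

Enter it via Conditional Formatting > New Rule > "Use a formula to determine which cells to format", with the active cell in row 2 so the relative references line up as the rule fills down.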
For very large datasets — think hundreds of thousands of rows or more — performance matters. The Remove Duplicates dialog handles a million rows in a few seconds on a modern laptop. UNIQUE on a million-row source can briefly hang Excel while the dynamic array recalculates, especially if you've stacked SORT and FILTER on top. COUNTIF helper columns scale poorly: every cell recalculates against the full range, so a 500,000-row helper column means 500,000 squared comparisons. Switch to Power Query for anything above 100,000 rows and you'll feel the difference immediately.
If your data lives in a real Excel Table (Ctrl+T to convert), every method runs slightly faster because Excel knows the exact bounds and doesn't have to guess where the data ends. Tables also auto-expand when you add new rows, so your formulas and queries stay in sync without manual range updates. Convert your range to a Table before doing anything else. It's a one-keystroke productivity multiplier most people skip for years before discovering it.
Whitespace and case differences are the two reasons people swear Remove Duplicates is broken. It isn't. The data is dirty. Run =TRIM(A2) in a helper column to strip leading and trailing spaces and collapse doubled internal spaces. Run =LOWER(A2) to flatten case. Paste the helper values back over the originals (Paste Special > Values) and then dedupe.
Numbers stored as text are another classic. =ISNUMBER(A2) returns FALSE for anything that looks numeric but isn't. Convert with =VALUE(A2), paste back as values, and your dedupe will finally catch the "123" / 123 collision. None of this is glamorous, but ten minutes of prep saves an hour of re-cleaning.
A common variation: what if you want to remove rows where any column contains a duplicate, rather than rows where the full row repeats? That's a different question, and the standard Remove Duplicates dialog won't answer it directly. Use a helper column with COUNTIF against the suspect column, then filter and delete. Or pivot the approach in Power Query — Group By on the suspect column, count rows, and merge back to filter out anything with a count greater than one. Both work; pick whichever you'll remember next month.
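In Power Query terms, that Group By approach comes out roughly like this, assuming a source table named Orders and a suspect column named Email (hypothetical names; the UI's Group By and Merge Queries commands generate equivalent steps):

```
let
    Source  = Excel.CurrentWorkbook(){[Name = "Orders"]}[Content],
    // Count how many rows share each Email value
    Counts  = Table.Group(Source, {"Email"},
                  {{"Count", each Table.RowCount(_), Int64.Type}}),
    // Keep only the Emails that appear exactly once
    Singles = Table.SelectRows(Counts, each [Count] = 1),
    // Inner join back to the source, so only rows with a unique Email survive
    Merged  = Table.NestedJoin(Source, {"Email"}, Singles, {"Email"}, "C", JoinKind.Inner),
    Result  = Table.RemoveColumns(Merged, {"C"})
in
    Result
```

The inner join does the filtering; removing the nested "C" column just cleans up the merge artifact.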
Keyboard shortcuts speed things up further. Pressing Alt, then A, then M opens Remove Duplicates from anywhere in the worksheet; that's the ribbon KeyTips sequence for the Data tab. Ctrl + Shift + L toggles AutoFilter so you can preview groups before deleting. Ctrl + T converts a range into a Table. Learn those three and your hands stop hopping between mouse and keyboard mid-task. Small change, real time saved across a long week.
One advanced pattern worth knowing: keeping the latest record when duplicates exist. The dialog keeps the first occurrence it encounters, which is rarely what you want for transaction logs. Sort your data descending by timestamp first, then run Remove Duplicates on the customer-ID column only. Excel keeps the first row of each group, which after the sort is the most recent. Five seconds of sort plus one click of Remove Duplicates beats writing a 30-line VBA macro for the same job.
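If you're on Excel 365 and would rather not touch the source, a dynamic-array sketch gets the same result, assuming customer IDs in column A and timestamps in column C (placeholder ranges; ties on the timestamp keep both rows):

```
=FILTER(A2:C1000, C2:C1000 = MAXIFS(C2:C1000, A2:A1000, A2:A1000))
```

MAXIFS with an array criteria returns each row's group maximum, so FILTER keeps only the rows whose timestamp equals the latest for that ID.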
Backups, always. Before any destructive operation, copy the sheet (right-click the tab > Move or Copy > tick "Create a copy"). Now you have an original to fall back on if the dialog deletes more than you expected. Or save the file under a new name. Or paste the data into a fresh sheet and dedupe that. Pick whichever feels natural — just have something to revert to.
If you're working in a shared workbook on OneDrive or SharePoint, Version History is your safety net. Right-click the file in OneDrive, pick Version History, and every save is restorable. Comforting on a Friday afternoon when you've just nuked the wrong column.
Pick your method based on three questions. First, do you need the original data preserved? If yes, use UNIQUE, Advanced Filter, or Power Query. If no, the Remove Duplicates dialog is fastest. Second, will you run this cleanup more than once? If yes, Power Query: refresh wins every time. If no, the dialog or UNIQUE is fine. Third, does case or whitespace matter? If yes, Power Query gets you there with a Trim step and a one-line case-insensitive comparer; every other method needs formula prep first.
For a one-off cleanup on tidy data, click Data > Remove Duplicates and move on. For dynamic reports that update with the source, use UNIQUE. For weekly imports from messy sources, build a Power Query and refresh. Match the tool to the job and you'll never accidentally delete the wrong rows again.
Practice makes the difference between fumbling around and finishing in two minutes. The methods above cover 95% of dedup scenarios you'll meet in real work: sales lists, customer databases, survey responses, inventory exports, transaction logs. Build a sample workbook with deliberately messy data, try each method, and see which one fits your style.
One last note on workflows. If you're handing off a dedup process to a colleague who'll run it weekly, document the steps and lean on Power Query. The query is self-documenting — every transformation step appears in the Applied Steps pane, so whoever inherits the workbook can read what happened in plain English. Compare that to a chain of helper columns and pasted-as-values operations, which leave no trace of how the data got cleaned. Future you will thank present you for the half hour of setup.
And if you're working with imports from a CRM, an ecommerce export, or a survey tool, expect dirty data every time. Source systems leak whitespace, mixed case, and weird Unicode characters constantly. A Power Query that trims, lowers, deduplicates, and loads — built once — handles every refresh forever. That's the difference between "I clean this file every Monday" and "I refresh the report and have coffee." Worth the investment.