CSV Explorer Online

Profile every column, build filters from the histogram, and chain transformations into a pipeline you can rewind. No upload, no install, free.

"View" is not the same as "explore"

A viewer is something you drop a file into to see it. You scroll, you maybe filter, you maybe export. That covers a real chunk of work, and we have a CSV viewer for exactly that.

Exploration is different. You're trying to figure out what a dataset actually contains. Are these phone numbers? How many distinct values are in this column? What does the date range look like? Is there a typo in the company name that's splitting it into two categories? You'll filter, then transform, then filter again, then back up two steps and try a different cut. The shape of that work is iterative and column-aware.

The explorer is built for that loop. It defaults to surfacing structure (per-column profiles, value distributions, type inference) before you ask, and it remembers what you've done so you can rewind.

Per-column explore cards

Every column in your CSV gets a card. The card tells you what the column is at a glance:

  • Inferred type (integer, float, date, string) and whether the inference is confident.
  • Null count and null percentage. Empty values are counted explicitly so you don't miss them.
  • Distinct count, plus the most common values for categorical columns.
  • For numeric columns: min, max, mean, median, and a histogram you can click into.
  • For dates: the date range and a histogram bucketed by day, week, or month.
  • For strings: length distribution and the most common values.

You don't write a query to get any of this. Open the file and the cards are there.
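The kind of profiling a card surfaces can be sketched in a few lines. This is a simplified stdlib-only illustration, not the explorer's actual internals; `profile_column` and the field names are made up, and real inference also handles floats vs. integers, dates, and confidence scoring:

```python
from collections import Counter
import csv, io, statistics

def profile_column(values):
    """Build a profile card for one CSV column: null stats, a rough
    inferred type, distinct count, and numeric summaries."""
    nulls = sum(1 for v in values if v.strip() == "")
    non_null = [v for v in values if v.strip() != ""]

    def as_float(v):
        try:
            return float(v)
        except ValueError:
            return None

    numbers = [f for f in (as_float(v) for v in non_null) if f is not None]
    inferred = "numeric" if non_null and len(numbers) == len(non_null) else "string"

    card = {
        "type": inferred,
        "nulls": nulls,  # empty values are counted explicitly
        "null_pct": round(100 * nulls / len(values), 1) if values else 0.0,
        "distinct": len(set(non_null)),
        "top_values": Counter(non_null).most_common(3),
    }
    if inferred == "numeric":
        card.update(min=min(numbers), max=max(numbers),
                    mean=statistics.mean(numbers),
                    median=statistics.median(numbers))
    return card

# Profile the "amount" column of a tiny CSV (one empty cell).
data = "name,amount\nAcme,10\nGlobex,20\nAcme,\nInitech,30\n"
rows = list(csv.DictReader(io.StringIO(data)))
card = profile_column([r["amount"] for r in rows])
```

In this toy run the card reports a numeric column with one null (25% of rows) and min/max/mean/median over the remaining values, which is the same shape of summary a column card shows.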

Filters that match what you see

The standard filter UX in a spreadsheet is a textbox: type a value, hope you spelled it right, hit enter. The explorer's filters come from the histogram on the column card. Click a bar to keep that bucket, shift-click to add another, drag across a range to keep everything in between.

The result: filters that line up with the data you're looking at, including the long tail you'd otherwise miss. A category filter shows you "Acme Corp" and "Acme Corp " (trailing space) as separate buckets, so you notice they're the same company before you build a chart on top.

Numeric filters are continuous. You can pull a date filter to "last 90 days" by dragging on the timeline, no parsing required.
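The click-a-bar interaction boils down to selecting histogram buckets and keeping the rows that fall inside them. A minimal sketch, with illustrative names (`make_buckets`, `keep_buckets`) rather than the explorer's real API:

```python
def make_buckets(values, n_buckets=4):
    """Split a numeric range into equal-width buckets, as a histogram would."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_buckets
    return [(lo + i * width, lo + (i + 1) * width) for i in range(n_buckets)]

def keep_buckets(rows, key, buckets, selected):
    """'Click a bar to keep that bucket': keep rows whose value falls in any
    selected bucket. Shift-click means more than one index in `selected`."""
    spans = [buckets[i] for i in selected]
    return [r for r in rows
            if any(lo <= r[key] <= hi for lo, hi in spans)]

rows = [{"amount": a} for a in (5, 12, 18, 25, 33, 40)]
buckets = make_buckets([r["amount"] for r in rows])   # 4 equal-width buckets over 5..40
first_and_last = keep_buckets(rows, "amount", buckets, selected=[0, 3])
```

Selecting the first and last bars keeps only the low and high ends of the distribution; dragging across a range is the same operation with a contiguous run of bucket indices.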

Transformations as a pipeline you can rewind

Every operation you apply (filter, group, pivot, type fix, calculated column, join) gets added to a visible pipeline. Each step shows what it did. You can edit a step's parameters, move it earlier or later, or delete it. The result re-runs from the source.

That's the difference between a real exploration tool and a one-shot viewer. You're not committing to choices the moment you make them. If a filter cuts off too much data, you fix it; you don't re-import the file.

The pipeline is also durable. You can save the workspace and come back later. The file lives in your browser's storage, and the pipeline definition lives next to it.
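The key property of such a pipeline is that results are always recomputed from the source, so editing or deleting a step is just re-running the list. A stripped-down sketch of that idea (the step functions and data here are hypothetical):

```python
def run_pipeline(source_rows, steps):
    """Re-run every step from the source. Editing, reordering, or deleting
    a step means re-running this loop with a different `steps` list --
    nothing is destructively applied to the data."""
    rows = list(source_rows)
    for step in steps:
        rows = step(rows)
    return rows

source = [{"region": "east", "amount": a} for a in (10, 55, 70)] + \
         [{"region": "west", "amount": 90}]

steps = [
    lambda rows: [r for r in rows if r["amount"] > 50],       # filter step
    lambda rows: [r for r in rows if r["region"] == "east"],  # second filter
]
narrow = run_pipeline(source, steps)       # both filters applied

# Rewind: the first filter cut off too much? Drop it and re-run from source.
wider = run_pipeline(source, steps[1:])
```

Because `source` is never mutated, "back up two steps and try a different cut" is a cheap list edit rather than a re-import.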

SQL when the UI runs out

Most exploration is faster with a UI. Some of it is faster with SQL. The explorer doesn't make you choose: every operation is backed by the same DuckDB-WASM session, and you can drop into a SQL editor at any point. Run a window function, a recursive CTE, a regex extraction, then continue building on the result with the visual pipeline.

Because DuckDB is a real SQL engine, the queries you'd write against a production warehouse work here too. Most syntax (window functions, joins, set operations, string functions, date arithmetic) is identical.
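As an illustration of that portability, the same window-function SQL runs against SQLite via Python's stdlib `sqlite3` (assuming the bundled SQLite is 3.25 or newer, which added window functions). The table and data are invented for the example; in the explorer you'd run the query against your loaded CSV in the DuckDB session instead:

```python
import sqlite3

# Hypothetical table standing in for a loaded CSV.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("east", 10), ("east", 30), ("west", 20), ("west", 5)])

# Rank rows within each region by amount, descending -- standard SQL
# that works unchanged in DuckDB, SQLite, and most warehouses.
query = """
    SELECT region, amount,
           ROW_NUMBER() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
    FROM sales
    ORDER BY region, rnk
"""
ranked = conn.execute(query).fetchall()
```

The top-ranked row per region comes back first within each partition, exactly as it would in a warehouse.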

When to use the explorer vs the viewer

Both run on the same engine. The split is about intent.

  • Use the CSV viewer when someone hands you a file and you need to see it. Open, scroll, maybe filter, maybe export. Done.
  • Use the explorer when you have a question to answer. You'll iterate, build up transformations, want to rewind a step, run a quick SQL query, save the workspace. The explorer is built to live with the file for a while.

Frequently Asked Questions

What's the difference between the CSV explorer and the CSV viewer?

The CSV viewer is built around opening a file and looking at it: drop, scroll, filter, export. The CSV explorer is built around interactive analysis: per-column profile cards, filters that read off the histogram, and a transformation pipeline you can rewind. Same engine, different intent.

Can I save a transformation pipeline and re-run it on a new file?

Yes. Operations are recorded in order; you can edit, reorder, or remove any step. The pipeline metadata stays attached to the workspace and can be re-applied to another file with the same schema.

Does the explorer use SQL or a no-code interface?

Both. Most operations are point-and-click (filter, group, pivot, join, fix types). When you need something the UI doesn't cover, you can drop into a SQL editor that runs against the same DuckDB session, then continue building on the result.

What size CSV can the explorer handle?

The limit is your browser's available memory rather than a hardcoded cap. Most browsers comfortably handle files in the hundreds of megabytes; DuckDB-WASM processes millions of rows without leaving the tab.

Does my CSV get uploaded anywhere?

No. The file is opened locally in the browser. SQL, filters, transformations, and exports all run in your tab. There's no server roundtrip and no API to send data to.

Open a CSV in the explorer

Drop a file. Skim the column cards. Build a pipeline. Free, no sign-up, your data stays on your device.

Start exploring