Convert Parquet to CSV in your browser

Open any Parquet file (Snappy, Zstd, Gzip), preview the data, pick the columns you want, and export to CSV. No upload, no install, no size cap.

When you'd convert Parquet to CSV

CSV is the lowest common denominator. Parquet is better for analytics, but you'll often want to convert when:

  • Handing data to a non-technical recipient. Most people open spreadsheets, not Parquet. Excel, Numbers, Google Sheets all read CSV directly. None of them read Parquet without an extension.
  • Feeding a legacy or specific-purpose tool. ETL platforms, accounting software, niche scientific tools, even some BI dashboards still expect CSV input. Parquet support is uneven outside the data-engineering stack.
  • Eyeballing the data. Parquet is a binary format. You can't open it in a text editor. Sometimes you just want to see the rows, and CSV is the fastest path to "open in any text editor."
  • Sharing a sample without sharing infrastructure. A 50 MB CSV attached to an email is a working dataset. The same data as a Parquet file is a "what tool do I open this with?" question.
  • Diff and version control. Git plays better with text. If you want to spot what changed between two snapshots, CSV diffs are readable; Parquet diffs are not.

Worked example

Take a Parquet file with this schema:

order_id     INT64
customer     STRUCT(id INT32, name STRING)
quantity     INT32
total_cents  INT64
ordered_at   TIMESTAMP_NS

By default the CSV output flattens structs and writes timestamps as ISO 8601:

order_id,customer,quantity,total_cents,ordered_at
1001,"{""id"":42,""name"":""Acme Corp""}",3,8700,2026-01-15T14:30:00
1002,"{""id"":17,""name"":""Beta Inc""}",1,4500,2026-01-16T09:12:33

If you'd rather have customer.id and customer.name as their own columns, do that in the explorer with a SQL expression (SELECT order_id, customer.id AS customer_id, customer.name AS customer_name, ...) before exporting. The pipeline is recorded so the result is reproducible.
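Spelled out in full, the flattening expression looks like this (column names are taken from the example schema above; `orders` is a placeholder for however your session names the loaded file):

```sql
-- Flatten the customer struct into two top-level columns before export.
SELECT
  order_id,
  customer.id   AS customer_id,
  customer.name AS customer_name,
  quantity,
  total_cents,
  ordered_at
FROM orders;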

Format-specific gotchas

  • Type metadata is lost. CSV columns are all strings on the wire. The recipient's tool re-infers types and may guess differently than your source. If type fidelity matters, ship Parquet, not CSV.
  • Timestamps and timezones. ExploreMyData writes ISO 8601 by default. If your downstream expects "2026-01-15 14:30:00" without the T separator (some BI tools), reformat the column with a SQL expression first.
  • NULL vs empty string. Parquet has real NULLs; CSV does not. ExploreMyData writes NULL as an empty field by default, which is convention but not universal. If your tool needs the literal string "NULL", configure that in the export dialog.
  • Nested types (struct, list, map). CSV is flat. The exporter defaults to JSON-stringifying nested values inside the cell, which round-trips but is awkward for spreadsheet users. Flatten first if a human is going to look at it.
  • Decimals. Parquet DECIMAL columns serialise as numbers in CSV with full precision. Your downstream tool may then parse them as floats and silently round; if exact decimal arithmetic matters, communicate that to whoever consumes the file.
  • File size grows. A 200 MB Parquet often becomes 1 to 2 GB as CSV. Worth noting before you email the result.
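The first and third gotchas (type loss and NULL handling) are easy to see with Python's standard csv module. This is illustrative only, not ExploreMyData's code:

```python
import csv
import io

# Write one row the way most CSV exporters do: numbers become decimal
# text and NULL becomes an empty field.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["order_id", "total_cents", "coupon"])
writer.writerow([1001, 8700, None])  # None serialises as an empty field

# Read it back: every value is now a string, and the NULL is
# indistinguishable from a genuine empty string.
buf.seek(0)
header, row = list(csv.reader(buf))
print(row)  # ['1001', '8700', '']
```

Whatever wrote the file knew `total_cents` was an INT64 and `coupon` was NULL; whatever reads the CSV has to guess both.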

How this differs from the alternatives

  • vs MyGeodata Cloud. Server-side: you upload the Parquet to their infrastructure (cap 5 GB), they convert, you download. Parquet often holds analytics data you don't want to upload anywhere. ExploreMyData parses locally with no upload.
  • vs parquettocsv.online and parquettocsv.com. Both run in the browser, so privacy is comparable. Hard caps differ: parquettocsv.online is 10 MB per file, parquettocsv.com is 100 MB. ExploreMyData has no hardcoded ceiling because it streams via DuckDB-WASM.
  • vs ParquetReader. Free preview, full export gated behind upgrade. ExploreMyData exports the full file for free.
  • vs pandas (read_parquet + to_csv). Works well for files that fit in RAM, but you need Python plus pyarrow or fastparquet, and pandas materialises the whole DataFrame in memory. ExploreMyData uses DuckDB's streaming reader, so RAM isn't the bottleneck.
  • vs DuckDB CLI. If you have it installed, COPY (SELECT * FROM 'in.parquet') TO 'out.csv' (HEADER, DELIMITER ','); is fastest. ExploreMyData is the same engine in a browser tab.
  • vs Tad Viewer. Tad is a free Electron desktop app for viewing Parquet, with CSV export. Works offline once installed. ExploreMyData runs in any browser without a per-OS install.

Frequently Asked Questions

Why would I convert Parquet to CSV?

Parquet is the right format for analytics workloads, but CSV is still what most spreadsheets, legacy tools, and non-technical recipients expect. If you're handing data to someone who'll open it in Excel, importing into a system that doesn't read Parquet, or just eyeballing the data, CSV is the practical answer.

Does this work with Snappy and Zstd-compressed Parquet?

Yes. DuckDB-WASM reads Parquet files compressed with Snappy, Gzip, Zstd, LZ4, and Brotli transparently. You don't need to know the compression up front; drop the file in and it opens.

How are timestamps represented in the CSV output?

ExploreMyData writes timestamps in ISO 8601 (e.g., 2026-01-15T14:30:00). If the Parquet column has timezone information, that's preserved. You can override the format before exporting if your downstream tool expects something specific.
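The difference between the two formats is just the separator, as a quick sketch with Python's stdlib datetime shows (this illustrates the formats, not the tool's internals):

```python
from datetime import datetime

ts = datetime(2026, 1, 15, 14, 30, 0)

iso = ts.isoformat()                    # ISO 8601, the default
bi = ts.strftime("%Y-%m-%d %H:%M:%S")   # space-separated variant some BI tools expect

print(iso)  # 2026-01-15T14:30:00
print(bi)   # 2026-01-15 14:30:00
```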

What happens to nested columns (struct, list, map)?

CSV is flat, so nested columns need handling. By default ExploreMyData writes nested values as JSON strings inside the CSV cell, which keeps the data round-trippable. If you'd rather flatten a struct into multiple columns, do that with a SQL expression in the explorer before exporting.
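The JSON-in-a-cell default can be reproduced with Python's stdlib csv and json modules; note how the doubled quotes match the worked example earlier on this page (illustrative, not the exporter's actual code):

```python
import csv
import io
import json

customer = {"id": 42, "name": "Acme Corp"}

buf = io.StringIO()
writer = csv.writer(buf)
# The struct becomes a compact JSON string inside a single CSV cell;
# csv doubles the embedded quotes per RFC 4180.
writer.writerow([1001, json.dumps(customer, separators=(",", ":")), 3])

print(buf.getvalue().strip())
# 1001,"{""id"":42,""name"":""Acme Corp""}",3
```

The cell round-trips: parsing it back with a CSV reader plus `json.loads` recovers the original struct.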

Is there a file size limit?

No fixed cap. Practical limits depend on your machine's available memory. DuckDB-WASM streams the read, so multi-gigabyte Parquet files often work without issue, while equivalent Python (pandas) approaches typically need everything in RAM.

Can I pick which columns to include in the CSV?

Yes. Open the file, deselect the columns you don't want, and export. You can also filter rows or add computed columns before the export, all without leaving the browser.

Convert your Parquet to CSV

No sign-up, no upload, no row cap. Open your Parquet, pick columns, export.

Open the converter