Open any Parquet file (Snappy, Zstd, Gzip), preview the data, pick the columns you want, and export to CSV. No upload, no install, no size cap.
CSV is the lowest common denominator. Parquet is better for analytics, but you'll often want to convert when a spreadsheet, legacy tool, or non-technical recipient expects CSV.
Take a Parquet file with this schema:
order_id INT64
customer STRUCT(id INT32, name STRING)
quantity INT32
total_cents INT64
ordered_at TIMESTAMP_NS
By default the CSV output writes structs as JSON strings and timestamps as ISO 8601:
order_id,customer,quantity,total_cents,ordered_at
1001,"{""id"":42,""name"":""Acme Corp""}",3,8700,2026-01-15T14:30:00
1002,"{""id"":17,""name"":""Beta Inc""}",1,4500,2026-01-16T09:12:33
If you'd rather have customer.id and customer.name as their own columns, do that in the explorer with a SQL expression (SELECT order_id, customer.id AS customer_id, customer.name AS customer_name, ...) before exporting. The pipeline is recorded so the result is reproducible.
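Spelled out against the example schema, the expression might look like this (a sketch; orders is a placeholder for whatever name your file is loaded under):

-- Pull the struct fields out into top-level columns
SELECT
  order_id,
  customer.id   AS customer_id,
  customer.name AS customer_name,
  quantity,
  total_cents,
  ordered_at
FROM orders;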
In Python, pandas (read_parquet + to_csv) works well for files that fit in RAM, but you need Python plus pyarrow or fastparquet, and pandas materialises the whole DataFrame in memory. ExploreMyData uses DuckDB's streaming reader, so RAM isn't the bottleneck.

On the command line, DuckDB's COPY (SELECT * FROM 'in.parquet') TO 'out.csv' (HEADER, DELIMITER ','); is fastest. ExploreMyData is the same engine in a browser tab.

Parquet is the right format for analytics workloads, but CSV is still what most spreadsheets, legacy tools, and non-technical recipients expect. If you're handing data to someone who'll open it in Excel, importing into a system that doesn't read Parquet, or just eyeballing the data, CSV is the practical answer.
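If you have the DuckDB CLI handy, the struct flattening and the export can happen in one statement; a sketch, assuming the example schema and the file names in.parquet and out.csv from above:

-- Flatten the customer struct and stream the result straight to CSV
COPY (
  SELECT
    order_id,
    customer.id   AS customer_id,
    customer.name AS customer_name,
    quantity,
    total_cents,
    ordered_at
  FROM 'in.parquet'
) TO 'out.csv' (HEADER, DELIMITER ',');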
Yes. DuckDB-WASM reads Parquet files compressed with Snappy, Gzip, Zstd, LZ4, and Brotli transparently. You don't need to know the compression up front; drop the file in and it opens.
ExploreMyData writes timestamps in ISO 8601 (e.g., 2026-01-15T14:30:00). If the Parquet column has timezone information, that's preserved. You can override the format before exporting if your downstream tool expects something specific.
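For example, you could render the timestamp as text with DuckDB's strftime before exporting; a sketch, assuming the ordered_at column from the example and a day-first target format (orders is a placeholder table name):

-- Format the timestamp as a plain string in the layout the downstream tool expects
SELECT
  order_id,
  strftime(ordered_at, '%d/%m/%Y %H:%M:%S') AS ordered_at
FROM orders;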
CSV is flat, so nested columns need handling. By default ExploreMyData writes nested values as JSON strings inside the CSV cell, which keeps the data round-trippable. If you'd rather flatten a struct into multiple columns, do that with a SQL expression in the explorer before exporting.
No fixed cap. The practical limit is how much memory your browser can use on your machine. DuckDB-WASM streams the read, so multi-gigabyte Parquet files often work without issue, while equivalent Python (pandas) approaches typically need everything in RAM.
Yes. Open the file, deselect the columns you don't want, and export. You can also filter rows or add computed columns before the export, all without leaving the browser.
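As a sketch of what that can look like against the example schema (orders is a placeholder table name, and total_dollars is a made-up computed column):

-- Keep only multi-item orders, drop the customer column, add a dollar amount
SELECT
  order_id,
  quantity,
  total_cents / 100.0 AS total_dollars,
  ordered_at
FROM orders
WHERE quantity >= 2;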
No sign-up, no upload, no row cap. Open your Parquet, pick columns, export.
Open the converter