Convert JSON to CSV

Flatten nested objects, explode arrays into rows, handle JSONLines and varied schemas. The result is a tabular CSV ready for any spreadsheet or pipeline.

When you'd convert JSON to CSV

  • Importing API responses into a spreadsheet. APIs love JSON. Spreadsheets love CSV. Converting between the two is the bridge most people end up walking.
  • Feeding BI tools and warehouses. Most ingestion paths into BigQuery, Snowflake, or Looker take CSV (or Parquet). Flat JSON-to-CSV is the typical first step in productionising an API extract.
  • Quick analysis of API output. A long JSON array is hard to scan. The same data as CSV opens in Excel or anywhere else and is immediately filterable.
  • Sharing with someone who doesn't read JSON. JSON is fine for engineers. For most other people it's a wall of brackets. CSV is the lowest common denominator.
  • Diffing two API snapshots. JSON diffs are noisy because of formatting and ordering. Sorted CSV diffs reveal what actually changed.

Worked example: nested objects flatten with dot notation

An API response with a small nested customer object:

[
  { "order_id": 1001, "customer": { "id": 42, "name": "Acme Corp" }, "qty": 3 },
  { "order_id": 1002, "customer": { "id": 17, "name": "Beta Inc"  }, "qty": 1 }
]

By default the CSV output flattens nested objects using a dot:

order_id,customer.id,customer.name,qty
1001,42,Acme Corp,3
1002,17,Beta Inc,1

If the customer object had its own nested address, you'd see customer.address.city and so on. You can change the separator to underscore, or limit how deep we flatten, in the export dialog.
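
If you want to reproduce this flattening in code, pandas does the same thing. A minimal sketch, for illustration only (this is not what the converter runs internally):

import pandas as pd

data = [
    {"order_id": 1001, "customer": {"id": 42, "name": "Acme Corp"}, "qty": 3},
    {"order_id": 1002, "customer": {"id": 17, "name": "Beta Inc"}, "qty": 1},
]

# json_normalize flattens nested objects; sep controls the joining character
df = pd.json_normalize(data, sep=".")
print(df.to_csv(index=False))
# order_id,customer.id,customer.name,qty
# 1001,42,Acme Corp,3
# 1002,17,Beta Inc,1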

Format-specific gotchas

  • Nested arrays force a choice. If a record has tags: ["a", "b", "c"], you either explode the parent into three rows (one per tag) or stringify the array as ["a","b","c"] in one cell. The right answer depends on what you'll do with the CSV. The default is stringify; both options are sketched after this list.
  • Heterogeneous schemas. JSON arrays often have records with different keys. We union the keys; missing values become empty cells. The column profile shows you which columns have null/empty and how often.
  • JSONLines (NDJSON) vs JSON arrays. Both are detected automatically. JSONLines is preferred for streaming; one record per line means we don't have to parse the whole file at once.
  • Type coercion across records. If half the records have qty: 3 (integer) and the other half have qty: "3" (string), DuckDB picks the more permissive type (string) to preserve all values; see the second sketch after this list. Worth normalising at the source if you can.
  • Booleans. JSON booleans become "true"/"false" in CSV. If your downstream wants 1/0 or Yes/No, change the column type before exporting.
  • Ordering. The JSON spec treats object keys as unordered. The CSV column order follows the order in which DuckDB first saw each key. If you need a specific column order, reorder columns in the explorer before exporting.
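
To make the nested-array choice concrete, here is a small pandas sketch of both options (illustrative only; the converter does this for you):

import json
import pandas as pd

df = pd.DataFrame([{"order_id": 1001, "tags": ["a", "b", "c"]}])

# Option 1: explode -- one row per array element, parent fields repeated
print(df.explode("tags"))
#    order_id tags
# 0      1001    a
# 0      1001    b
# 0      1001    c

# Option 2: stringify -- keep the whole array as JSON text in one cell
df["tags"] = df["tags"].apply(json.dumps)
print(df)
#    order_id             tags
# 0      1001  ["a", "b", "c"]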
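
And for the type-coercion point, you can inspect what DuckDB infers with the duckdb Python package (read_json_auto is a real DuckDB function; the file name is just an example, and the exact inferred type can vary by DuckDB version):

import json
import duckdb

# Two records whose qty types disagree: integer vs string
with open("mixed.jsonl", "w", encoding="utf-8") as f:
    f.write(json.dumps({"qty": 3}) + "\n")
    f.write(json.dumps({"qty": "3"}) + "\n")

# DuckDB infers one type per column across all records; per the note
# above, the more permissive type wins so no value is lost
print(duckdb.sql("DESCRIBE SELECT * FROM read_json_auto('mixed.jsonl')"))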

How this differs from the alternatives

  • vs CSVJSON (json2csv). Maintained by Flatfile. Has a flatten option, but the page is mostly an upsell for their enterprise importer and there's no preview of the resulting CSV inline. ExploreMyData shows the full schema and a real table preview before you export.
  • vs ConvertCSV (json-to-csv). Feature-rich (custom delimiters, JSONLines support, date formatting), but the UI is dense and ad-supported, and processing slows on large inputs. ExploreMyData has a cleaner pipeline and no ads on the converter itself.
  • vs Konklone (json to csv). Open-source and entirely client-side, with a live preview. The page itself warns that "extremely large files may cause trouble." ExploreMyData uses DuckDB-WASM which streams the parse, so size is bounded by browser memory rather than parser speed.
  • vs jq. Excellent CLI for extracting fields: jq -r '.[] | [.a,.b] | @csv'. The friction is the install (Linux/Mac/Win all separately), and you have to hand-write the field projection plus pre-flatten any nested arrays. ExploreMyData handles flattening and missing keys automatically.
  • vs pandas (json_normalize). Powerful, with explicit record/meta paths for complex nesting (sketched below). Requires Python plus pandas, and you need enough familiarity to pick the right normalize options for your shape. ExploreMyData makes the common case point-and-click.
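
For reference, the record/meta case in pandas looks like this (the orders/items field names are hypothetical):

import pandas as pd

orders = [
    {"order_id": 1001, "items": [{"sku": "A1", "qty": 2}, {"sku": "B2", "qty": 1}]},
]

# record_path names the nested array to explode into rows;
# meta lists parent fields to repeat onto each row
df = pd.json_normalize(orders, record_path="items", meta=["order_id"])
print(df)
#   sku  qty order_id
# 0  A1    2     1001
# 1  B2    1     1001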

Frequently Asked Questions

How are nested objects flattened?

By default, nested keys are joined with a dot. {"user": {"id": 42, "name": "Alice"}} becomes columns user.id and user.name. You can change the separator (e.g., underscore) or stop at a specific depth in the export dialog.

What about nested arrays?

You have two options. Explode the array (each element becomes its own row, with the parent fields repeated) or stringify it (the array stays as JSON inside one cell). Explode is the default for top-level arrays of objects.

Does this support JSONLines (one JSON object per line)?

Yes. JSONLines and NDJSON files are auto-detected. Each line is treated as one record. Mixed-schema lines are unified by union of keys; missing keys become empty cells in the CSV.
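
If you ever need to do this by hand, the one-record-per-line property makes the reader trivial. A Python sketch (events.ndjson is a placeholder name):

import json

records = []
with open("events.ndjson", encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        if line:                      # tolerate blank lines
            records.append(json.loads(line))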

What if rows in the JSON have different keys?

ExploreMyData unions the keys across all records and emits a column for each unique key. Records that don't have a particular key get an empty value in that column. You see the full schema in the column profile before exporting.
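
The same union-of-keys behaviour is easy to sketch in Python (illustrative, not the converter's code):

import csv

records = [{"a": 1, "b": 2}, {"a": 3, "c": 4}]

# Union of keys in first-seen order: ['a', 'b', 'c']
fieldnames = list(dict.fromkeys(k for r in records for k in r))

with open("out.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames, restval="")
    writer.writeheader()
    writer.writerows(records)        # restval="" -> missing keys become empty cells
# a,b,c
# 1,2,
# 3,,4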

How are nulls represented?

JSON null becomes an empty CSV cell by default (the most common convention). Missing keys also become empty cells. If you need them to differ, switch one to a literal string like "NULL" in the export dialog.
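
In Python terms: json.loads turns null into None, and the csv module writes None as an empty field, which matches the default here. A tiny sketch:

import csv
import io
import json

row = json.loads('{"id": 7, "note": null}')     # null -> Python None
buf = io.StringIO()
csv.writer(buf).writerow([row["id"], row["note"]])
print(repr(buf.getvalue()))                     # '7,\r\n' -- None becomes an empty cell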

Is there a size limit?

No fixed cap. JSON is parsed by DuckDB-WASM, which streams large arrays without loading everything into memory at once. Files in the hundreds of megabytes work routinely.

Convert your JSON to CSV

No sign-up, no upload. Nested objects flatten cleanly; arrays explode or stringify, your choice.

Open the converter