Flatten nested objects, explode arrays into rows, handle JSONLines and varied schemas. The result is a tabular CSV ready for any spreadsheet or pipeline.
An API response with a small nested customer object:
[
{ "order_id": 1001, "customer": { "id": 42, "name": "Acme Corp" }, "qty": 3 },
{ "order_id": 1002, "customer": { "id": 17, "name": "Beta Inc" }, "qty": 1 }
]
By default the CSV output flattens nested objects using a dot separator:
order_id,customer.id,customer.name,qty
1001,42,Acme Corp,3
1002,17,Beta Inc,1
If the customer object had its own nested address, you'd see customer.address.city and so on. You can change the separator to underscore, or limit how deep we flatten, in the export dialog.
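The flattening described above can be sketched in a few lines of Python. This is an illustrative stand-in, not the tool's actual implementation; the `sep` and `max_depth` parameters are hypothetical names mirroring the export-dialog options.

```python
import json

def flatten(obj, sep=".", max_depth=None, _prefix="", _depth=0):
    """Flatten nested dicts into one dict with joined keys.

    Only dicts are descended; arrays are stringified as JSON text.
    max_depth caps how deep flattening goes (None = unlimited).
    """
    flat = {}
    for key, value in obj.items():
        name = f"{_prefix}{sep}{key}" if _prefix else key
        if isinstance(value, dict) and (max_depth is None or _depth < max_depth):
            flat.update(flatten(value, sep, max_depth, name, _depth + 1))
        elif isinstance(value, list):
            flat[name] = json.dumps(value, separators=(",", ":"))  # stringify arrays
        else:
            flat[name] = value
    return flat

row = flatten({"order_id": 1001, "customer": {"id": 42, "name": "Acme Corp"}, "qty": 3})
print(row)  # {'order_id': 1001, 'customer.id': 42, 'customer.name': 'Acme Corp', 'qty': 3}
```

With `sep="_"` the same record yields `customer_id` and `customer_name` instead, and `max_depth=1` would stop after one level of nesting.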
If a record contains an array, say tags: ["a", "b", "c"], you either explode the parent into three rows (one per tag) or stringify the array as ["a","b","c"] in one cell. The right answer depends on what you'll do with the CSV. The default is stringify.

If half your records have qty: 3 (integer) and the other half have qty: "3" (string), DuckDB picks the more permissive type (string) to preserve all values. Worth normalising at the source if you can.

jq can do the same conversion, e.g. jq -r '.[] | [.a,.b] | @csv'. The friction is the install (Linux/Mac/Win all separately), and you have to hand-write the field projection plus pre-flatten any nested arrays. ExploreMyData handles flattening and missing keys automatically.

pandas can do it too (json_normalize). Powerful, with explicit record/meta paths for complex nesting. Requires Python plus pandas, and you need enough familiarity to pick the right normalize options for your shape. ExploreMyData makes the common case point-and-click.

By default, nested keys are joined with a dot. {"user": {"id": 42, "name": "Alice"}} becomes columns user.id and user.name. You can change the separator (e.g., underscore) or stop at a specific depth in the export dialog.
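The type-widening rule for mixed columns (qty: 3 alongside qty: "3") can be sketched as follows. This is a toy illustration of the principle, not DuckDB's actual inference logic.

```python
def widen_column(values):
    """If any value in a column is a string, widen the whole column to string
    so no value is lost. Otherwise leave the column as-is."""
    if any(isinstance(v, str) for v in values):
        return [str(v) for v in values]
    return values

print(widen_column([3, "3", 7]))  # ['3', '3', '7'] -- widened to string
print(widen_column([3, 7]))       # [3, 7] -- stays numeric
```

Normalising at the source (emitting qty as a number everywhere) avoids the widening entirely and keeps the CSV column numeric.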
You have two options: explode the array (each element becomes its own row, with the parent fields repeated) or stringify it (the array stays as JSON inside one cell). Explode is the default for top-level arrays of objects.
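The two options can be sketched with stdlib Python; this is an illustration of the behaviour described above, not the tool's code.

```python
import json

record = {"order_id": 1001, "tags": ["a", "b", "c"]}

# Explode: one output row per array element, parent fields repeated.
exploded = [
    {**{k: v for k, v in record.items() if k != "tags"}, "tags": tag}
    for tag in record["tags"]
]

# Stringify: the whole array kept as JSON text inside one cell.
stringified = {**record, "tags": json.dumps(record["tags"], separators=(",", ":"))}

print(exploded)     # three rows, order_id repeated in each
print(stringified)  # one row, tags column holds '["a","b","c"]'
```

Explode is the right choice when you'll group or filter by tag downstream; stringify keeps row counts stable when the array is just payload.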
Yes. JSONLines and NDJSON files are auto-detected. Each line is treated as one record. Mixed-schema lines are unified by union of keys; missing keys become empty cells in the CSV.
ExploreMyData unions the keys across all records and emits a column for each unique key. Records that don't have a particular key get an empty value in that column. You see the full schema in the column profile before exporting.
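The line-by-line parse and key-union behaviour described in the two answers above can be sketched with Python's stdlib; the NDJSON sample here is made up for illustration.

```python
import csv, io, json

ndjson = '{"a": 1, "b": 2}\n{"a": 3, "c": 4}\n'

# Each line is one record.
records = [json.loads(line) for line in ndjson.splitlines() if line.strip()]

# Union of keys across all records, preserving first-seen order.
columns = list(dict.fromkeys(key for rec in records for key in rec))

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=columns, restval="")  # missing keys -> empty cells
writer.writeheader()
writer.writerows(records)
print(buf.getvalue())
```

The first record lacks column c and the second lacks b, so each gets an empty cell in the other's column, giving a rectangular CSV.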
JSON null becomes an empty CSV cell by default (the most common convention). Missing keys also become empty cells. If you need them to differ, switch one to a literal string like "NULL" in the export dialog.
No fixed cap. JSON is parsed by DuckDB-WASM, which streams large arrays without loading everything into memory at once. Files in the hundreds of megabytes work routinely.
No sign-up, no upload. Nested objects flatten cleanly; arrays explode or stringify, your choice.
Open the converter