What is the Avro to CSV Converter?

You have a batch of Apache Avro records — say a day's worth of orders pulled out of Kafka — encoded as JSON. The analyst on the other side wants a spreadsheet. This tool turns an array of JSON-encoded Avro records into CSV with proper RFC 4180 quoting. Paste on the left, copy CSV from the right.

The converter reads every object in the input array, takes the union of their keys as the column header, and builds rows in that column order. Missing keys come out as empty cells. Nested objects (a record inside a record, an array of items, etc.) are JSON-stringified into a single cell, since CSV is a flat format and there is no spec-blessed way to nest. Quoting follows RFC 4180: any value containing a comma, double quote, or newline is wrapped in double quotes and inner quotes are doubled.
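The conversion logic above can be sketched in a few lines of Python. This is an illustrative reimplementation, not the tool's actual source; the function name is made up for the example:

```python
import csv
import io
import json

def avro_json_to_csv(records):
    """Sketch of the converter's behavior: header is the union of keys in
    first-seen order, missing keys become empty cells, nested values are
    JSON-stringified, and the csv module applies RFC 4180 quoting."""
    if isinstance(records, dict):       # a single object becomes a one-row CSV
        records = [records]

    header = []
    for rec in records:                 # union of keys, first-seen order
        for key in rec:
            if key not in header:
                header.append(key)

    def cell(value):
        if value is None:
            return ""
        if isinstance(value, (dict, list)):
            return json.dumps(value)    # flatten nested structures into one cell
        return value

    buf = io.StringIO()
    writer = csv.writer(buf, lineterminator="\n")
    writer.writerow(header)
    for rec in records:
        writer.writerow([cell(rec.get(k)) for k in header])
    return buf.getvalue()
```

Python's csv writer quotes any field containing a comma, quote, or newline and doubles inner quotes, which matches the RFC 4180 behavior described above.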

No upload. The browser parses your input with its built-in parser for <a href="https://datatracker.ietf.org/doc/html/rfc8259" target="_blank" rel="noopener">RFC 8259 JSON</a> and walks the array. Nothing is sent anywhere.

How to Use It

Three steps. The buttons described below are the ones on the page.

1

Paste an Array of Avro Records

The input must be a JSON array. Each element should be an Avro record encoded as JSON. Upload handles .json files; Sample loads three Order records you can use as a template:

[{"orderId":"ORD-58231","customerId":"CUST-1001","totalCents":12999,"currency":"USD"}, …]

A single object also works — it gets wrapped into a one-row CSV. Two-dimensional CSV needs an array of records.

2

Read the CSV Output

The right panel renders the header row first, then one row per record. Output refreshes 300 ms after you stop typing. Cells with commas, quotes, or newlines are quoted automatically — paste the result straight into Excel, Google Sheets, or pandas.read_csv.

3

Copy or Download

Copy grabs the CSV; Download saves it as output.csv. Conversion is fully client-side. Documentation on the underlying JSON parsing lives at MDN if you are curious.

When You'd Actually Use This

Sharing Data with Analysts

Engineers think in records and schemas. Analysts open Excel. Avro to CSV is the boundary tool — your team produces Avro, theirs consumes a CSV.

Quick Database Imports

Most databases ship a COPY FROM or LOAD DATA command that eats CSV directly. Pull a batch of records out of Schema Registry-backed topics, paste, save, import.

Spotting Bad Records

A 5,000-row CSV in a spreadsheet shows outliers fast — sort by amount, scan for nulls, find the row where currency is empty. Hard to do with a JSON blob.

Lightweight Reports

Need to email a snapshot to a stakeholder? CSV attaches cleanly, opens everywhere, and survives every email client. JSON does not.

Common Questions

How are nested records and arrays handled?

CSV is a flat, two-dimensional format — there is no nesting in the standard. Nested objects and arrays are serialized as a JSON string inside the cell, so the data is preserved but not split across columns. If you need flattening (parent.child columns), do a pre-pass on the JSON before pasting.
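That pre-pass is a few lines of Python. This is a sketch of one reasonable flattening scheme, not something the tool does for you; the function name is made up:

```python
def flatten(record, prefix=""):
    """Flatten nested dicts into dot-separated keys (parent.child),
    so each leaf value lands in its own CSV column."""
    flat = {}
    for key, value in record.items():
        path = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, f"{path}."))
        else:
            flat[path] = value
    return flat

nested = {"orderId": "ORD-58231", "customer": {"id": "CUST-1001", "tier": "gold"}}
flatten(nested)  # keys: orderId, customer.id, customer.tier
```

Run it over every record, dump the result back to a JSON array, and paste that into the converter.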

What about commas and quotes in values?

They are handled per RFC 4180. A cell containing a comma, double quote, or newline gets wrapped in double quotes. Inner double quotes are doubled (" becomes ""). Excel, LibreOffice Calc, Google Sheets, and pandas all read this back correctly.
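The rule is small enough to state as code. A minimal sketch of RFC 4180 field quoting (the function name is made up for illustration):

```python
def quote_cell(value: str) -> str:
    """Wrap the value in double quotes if it contains a comma, double
    quote, or newline; double any inner quotes (RFC 4180, section 2)."""
    if any(ch in value for ch in ',"\n'):
        return '"' + value.replace('"', '""') + '"'
    return value

quote_cell("plain")  # unchanged: no special characters
```

In practice you rarely write this by hand; Python's csv module and every spreadsheet importer implement the same rule.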

What if my records have different fields?

The header row is the union of all keys across all records, in first-seen order. A record missing a column simply gets an empty cell for that column. That matches what most CSV readers expect.

Can I paste a binary .avro container file?

Not directly. This tool reads JSON. For an OCF binary file, run avro-tools tojson sample.avro first, then paste the JSON output here. (One detail: tojson emits one record per line; wrap them in [ ] with comma separators before pasting.)
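Wrapping that line-per-record output into an array is mechanical. A small sketch (the function name and file name are made up):

```python
import json

def jsonl_to_array(text: str) -> str:
    """Wrap avro-tools `tojson` output (one JSON record per line)
    into a single JSON array ready to paste into the converter."""
    records = [json.loads(line) for line in text.splitlines() if line.strip()]
    return json.dumps(records)

# e.g.:
# with open("sample.json") as f:
#     print(jsonl_to_array(f.read()))
```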

Does my data go to a server?

No. Parsing and CSV generation run entirely in your browser. Nothing is uploaded, nothing is logged, nothing leaves the page.

How big a JSON array can I paste?

A few thousand records is no problem. Past tens of thousands of records, the Ace editor itself starts to lag — that is the bottleneck. For very large datasets, do the conversion in a script (Python csv + json modules, or pandas read_json + to_csv).
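A script version using only the standard library might look like this. It is a sketch following the same union-of-keys rules as the tool; the function and file names are made up:

```python
import csv
import json

def convert_file(json_path: str, csv_path: str) -> None:
    """Read a JSON array of records and write CSV with a union-of-keys
    header, for datasets too big to paste into the editor."""
    with open(json_path, encoding="utf-8") as f:
        records = json.load(f)

    header = []
    for rec in records:                 # union of keys, first-seen order
        for key in rec:
            if key not in header:
                header.append(key)

    with open(csv_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=header, restval="")
        writer.writeheader()
        for rec in records:
            writer.writerow({k: json.dumps(v) if isinstance(v, (dict, list)) else v
                             for k, v in rec.items()})

# usage: convert_file("orders.json", "orders.csv")
```

With pandas the equivalent is roughly `pd.read_json("orders.json").to_csv("orders.csv", index=False)`, though pandas fills missing keys with NaN rather than empty strings unless you tell it otherwise.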

Other Avro and CSV Tools

Once your data is in CSV, these tools cover the next steps: