CSV (Comma-Separated Values) is among the oldest and most ubiquitous tabular data formats - the spec everyone agrees to disagree about. RFC 4180 codified one common dialect in 2005, but in practice every database, spreadsheet, and BI tool emits a slightly different variant. This viewer parses the most common variants, renders them as a filterable table, and converts losslessly to and from JSON.
The parser is a hand-written state machine you can read in the source above (parseCSV) - small enough to audit, yet complete enough to handle the real edge cases: quoted fields containing the delimiter, doubled quotes ("") that escape a literal quote inside a quoted field, and CRLF or LF line endings, which mix freely in files passed between Windows and Unix machines. For very large files or pathological dialects, swap in PapaParse; for inspection workflows, the inline parser keeps the bundle small and the privacy story simple.
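As a rough illustration, here is a condensed sketch of that kind of two-state parser. It is not the actual parseCSV from the source - parseCsvSketch is an illustrative name - but it shows the technique: one flag tracks whether we are inside quotes, and delimiters and newlines only terminate fields when we are not.

```js
// Minimal two-state CSV parser sketch (illustrative, not the tool's parseCSV).
function parseCsvSketch(text, delimiter = ",") {
  const rows = [];
  let row = [];
  let field = "";
  let inQuotes = false;
  for (let i = 0; i < text.length; i++) {
    const ch = text[i];
    if (inQuotes) {
      if (ch === '"') {
        if (text[i + 1] === '"') { field += '"'; i++; } // "" -> literal quote
        else inQuotes = false;                          // closing quote
      } else {
        field += ch; // delimiters and newlines are literal inside quotes
      }
    } else if (ch === '"') {
      inQuotes = true;
    } else if (ch === delimiter) {
      row.push(field); field = "";
    } else if (ch === "\n" || ch === "\r") {
      if (ch === "\r" && text[i + 1] === "\n") i++; // treat CRLF as one break
      row.push(field); field = "";
      rows.push(row); row = [];
    } else {
      field += ch;
    }
  }
  if (field !== "" || row.length > 0) { row.push(field); rows.push(row); }
  return rows;
}
```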
Delimiter choice matters more than people expect. The default is a comma, but European spreadsheets default to the semicolon (because the comma serves as the decimal separator in most continental European locales), database exports often use tabs (giving you TSV), and pipe-delimited ("|") files turn up in older mainframe and ETL pipelines. The dropdown switches the parser and the round-trip exporter at the same time, so converting a semicolon CSV to JSON and back gives you semicolon CSV again.
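Using the sketch above, a semicolon dialect is just a different delimiter argument - the same value the dropdown would feed to both the parser and the exporter:

```js
// Semicolon dialect: a quoted field containing ";" stays one field.
const rows = parseCsvSketch('name;city\n"Müller; Anna";Berlin\n', ";");
// rows => [["name", "city"], ["Müller; Anna", "Berlin"]]
```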
The JSON conversion treats the first row as headers and produces an array of plain objects, one per data row. Going the other way, the JSON-to-CSV exporter takes the keys of the first object as headers - if later objects have extra keys, those columns are dropped. The escape function quotes any field containing the delimiter, a double quote, or a newline, exactly as RFC 4180 specifies, so the output round-trips through Excel and Numbers cleanly.
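A minimal sketch of both directions, assuming row arrays like the ones the parser sketch above produces; csvToJson, jsonToCsv, and escapeField are illustrative names, not the tool's actual exports:

```js
// CSV rows -> array of plain objects; the first row supplies the keys.
function csvToJson(rows) {
  const [headers, ...data] = rows;
  return data.map(cells =>
    Object.fromEntries(headers.map((h, i) => [h, cells[i] ?? ""]))
  );
}

// RFC 4180 escaping: quote a field containing the delimiter, a quote, or a
// line break, and double any embedded quotes.
function escapeField(value, delimiter) {
  const s = String(value);
  return s.includes(delimiter) || s.includes('"') || s.includes("\n") || s.includes("\r")
    ? '"' + s.replace(/"/g, '""') + '"'
    : s;
}

// Objects -> CSV text; headers come from the first object only, so extra
// keys on later objects are dropped, matching the behavior described above.
function jsonToCsv(objects, delimiter = ",") {
  const headers = Object.keys(objects[0]);
  const lines = [headers, ...objects.map(o => headers.map(h => o[h] ?? ""))];
  return lines
    .map(cells => cells.map(c => escapeField(c, delimiter)).join(delimiter))
    .join("\r\n"); // RFC 4180 prescribes CRLF between records
}
```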
Known CSV pitfalls the parser handles: fields wrapped in double quotes that contain commas ("Smith, John"), embedded newlines inside quoted fields, and the doubled-quote escape ("He said ""hi""" decodes to He said "hi"). What it doesn't handle: BOM markers (Excel sometimes prepends the bytes 0xEF 0xBB 0xBF to UTF-8 CSV - strip them if your headers look mangled), trailing whitespace inside fields (it is trimmed by design, so significant padding is lost), and Microsoft-specific quirks like the leading apostrophe that stops Excel auto-converting strings to numbers.
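If you hit the BOM case, a one-line strip before parsing is enough. A sketch, assuming the file has already been decoded to a JavaScript string, where the three bytes surface as the single code point U+FEFF:

```js
// Strip a UTF-8 BOM so the first header doesn't come out as "\uFEFFname".
function stripBom(text) {
  return text.charCodeAt(0) === 0xfeff ? text.slice(1) : text;
}
```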
Live row filtering does a case-insensitive substring search across every cell. It's linear and runs on each keystroke - fine up to about 50,000 rows on a modern laptop. Beyond that you'll feel the lag because re-rendering the table dominates. The fastest way to slim a huge CSV in this tool is to filter to your subset, copy the JSON output, and reload that subset for further work.
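The filter itself is nothing exotic - roughly this shape, with filterRows as an illustrative name. Every cell of every row is lowercased and scanned on each keystroke, which is why cost grows linearly with row count:

```js
// Case-insensitive substring match across every cell of every row.
function filterRows(rows, query) {
  const q = query.toLowerCase();
  return rows.filter(cells =>
    cells.some(cell => String(cell).toLowerCase().includes(q))
  );
}
```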
Privacy: the parser, table renderer, and JSON conversion all run in JavaScript inside your tab. There is no upload, no analytics on the data, and no server round-trip during conversion. The CSV/JSON you paste stays in your browser memory and is gone when the tab closes - which matters when the data is a customer export, a salary roster, or anything else you don't want sitting in a third-party log file.