Tools: Stop hand-writing INSERTs: JSON → SQL with dialect-aware output (Postgres/MySQL/SQLite)

Source: Dev.to

When you need to import JSON into a relational database, the “easy part” (copying data) quickly turns into the hard part: writing a schema and a pile of INSERT statements that actually run on your target database.

In practice, JSON → SQL gets annoying for three reasons:

- Type inference: JSON doesn’t carry a strict schema, so you have to decide what types to use.
- Arrays / nesting: nested objects and arrays don’t map cleanly into a single flat table.
- Dialect differences: “SQL” isn’t one thing. Postgres, MySQL, and SQLite have enough differences to cause real runtime errors.

Below is a practical workflow that works for the most common case (an array of objects you want to import as rows), plus a small example and a checklist of gotchas.

## The minimal workflow that actually works

## 1) Start with a JSON array of objects

If you can shape your data to an array of objects, the mapping is straightforward:

- each object → one row
- object keys → column names

```json
[
  {"id": 1, "name": "Alice", "active": true},
  {"id": 2, "name": "Bob", "active": false}
]
```

If your JSON is a single object (not an array), you can often wrap it into a one-element array for importing.

## 2) Decide a stable column order

You’ll save time later if you keep column order consistent:

- Use the key order from the first object (simple and predictable)
- Or sort keys alphabetically (stable across runs)

Consistency matters because it keeps your INSERT columns and values aligned.

## 3) Choose a strategy for nested values

You have to decide what to do with nested objects/arrays. There are two common strategies:

- Flatten nested keys (e.g., address.city → address_city): best when you control the schema and need queryable columns.
- Stringify nested values (store as text/JSON): best for quick imports when you want deterministic output.

For a “get it into the DB quickly” workflow, stringify nested values is usually the least painful.

## 4) Pick the SQL dialect before generating output

This is the part people skip, and it’s the reason “valid SQL” still fails. Pick your target database (Postgres, MySQL, or SQLite) first, and generate SQL that matches that target.

## A short example (JSON → CREATE TABLE + INSERT)

```json
[
  {"id": 1, "name": "Alice", "active": true, "score": 12.5},
  {"id": 2, "name": "Bob", "active": false, "score": null}
]
```

A reasonable SQL output (conceptually) looks like this:

```sql
CREATE TABLE users (
  id INTEGER,
  name TEXT,
  active BOOLEAN,
  score REAL
);

INSERT INTO users (id, name, active, score) VALUES
  (1, 'Alice', TRUE, 12.5),
  (2, 'Bob', FALSE, NULL);
```

This is intentionally “boring SQL”:

- It’s readable.
- It’s easy to run.
- It makes null-handling explicit.

## Dialect notes (Postgres vs MySQL vs SQLite)

You don’t need to memorize everything, but there are a few recurring differences worth calling out.

## 1) Identifiers and quoting

Column names like order, group, user, etc. can collide with reserved keywords.

- Postgres often uses double quotes for identifiers: "user"
- MySQL commonly uses backticks: `user`
- SQLite accepts double quotes, but behavior can vary depending on settings

If you’re generating SQL from arbitrary JSON keys, you need safe identifier quoting (or key normalization).

## 2) Data types aren’t identical

Even when names look similar, behavior differs:

- SQLite is famously permissive (dynamic typing). Many “types” are affinity hints.
- Postgres has strict types and richer type options.
- MySQL types are strict-ish but different again.

For imports from JSON, the pragmatic path is to use a small set of common types:

- integers → INTEGER
- floats → REAL (or DOUBLE depending on dialect)
- strings → TEXT
- booleans → BOOLEAN

## 3) Booleans and NULL behavior

- Postgres supports TRUE/FALSE and a real boolean type.
- MySQL also supports boolean-ish behavior (often as TINYINT(1) under the hood).
- SQLite doesn’t have a real boolean type; it stores 0/1 (but accepts many inputs).

For JSON imports, your generator should normalize booleans and nulls carefully.

## Common pitfalls (and how to avoid them)

## Pitfall #1: Mixed types across rows

You’ll see this in the wild:

```json
[
  {"value": 1},
  {"value": "2"}
]
```

What type is value? If you choose INTEGER, the string row breaks. If you choose TEXT, you lose numeric semantics. Reasonable fixes:

- default to TEXT when types conflict, or
- add a “strict” mode that rejects mixed-type columns.

## Pitfall #2: NULLs hide your real type

A column may be null for many rows, which makes naive type inference pick the wrong type. The fix:

- infer type from non-null values first; treat nulls as “unknown” until you see a real value.

## Pitfall #3: Reserved keywords and unsafe column names

These can break SQL or become annoying to query. The fix:

- quote identifiers properly for your dialect, and/or
- normalize keys (e.g., created-at → created_at).

## Want a quick converter?

If you’d rather not hand-write schema + inserts, I built a free online JSON → SQL tool that:

- generates CREATE TABLE + INSERT
- lets you choose a dialect (Postgres/MySQL/SQLite)
- supports practical options like column ordering and export

Tool: https://jsontotable.net/json-to-sql

If you want more examples and notes, here’s the guide: https://jsontotable.net/blog/export-and-share/json-to-sql-create-table-insert-dialect-guide

## If you found this useful

Tell me what your hardest JSON→SQL case looks like (nested data? mixed types? huge payloads?). I’m collecting real-world cases to improve the converter’s defaults.
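The two nested-value strategies from step 3 can be sketched in a few lines of Python. This is illustrative only (the function names and the `_` separator are my choices, not the tool’s):

```python
import json

def flatten(obj, parent_key="", sep="_"):
    """Flatten nested dicts: {"address": {"city": "Oslo"}} -> {"address_city": "Oslo"}."""
    out = {}
    for key, value in obj.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            out.update(flatten(value, new_key, sep))
        else:
            out[new_key] = value  # lists stay as values; flattening only recurses into dicts
    return out

def stringify_nested(obj):
    """Keep top-level keys, but store nested objects/arrays as JSON text."""
    return {
        key: json.dumps(value) if isinstance(value, (dict, list)) else value
        for key, value in obj.items()
    }

row = {"id": 1, "address": {"city": "Oslo"}, "tags": ["a", "b"]}
print(flatten(row))           # nested keys become address_city
print(stringify_nested(row))  # nested values become JSON strings
```

Note the trade-off the article describes: `flatten` gives you queryable columns, `stringify_nested` gives you a deterministic flat shape with no schema decisions.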
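Pitfalls #1 and #2 suggest a concrete inference rule: look only at non-null values, and fall back to TEXT on conflict. A minimal sketch (names are mine, not the converter’s):

```python
def infer_type(values):
    """Infer a SQL column type from Python values.

    NULLs are treated as 'unknown' (pitfall #2); mixed types across rows
    fall back to TEXT (pitfall #1, non-strict mode).
    """
    inferred = None
    for v in values:
        if v is None:
            continue  # NULLs carry no type information
        if isinstance(v, bool):   # check bool before int: bool is an int subclass
            t = "BOOLEAN"
        elif isinstance(v, int):
            t = "INTEGER"
        elif isinstance(v, float):
            t = "REAL"
        else:
            t = "TEXT"
        if inferred is None:
            inferred = t
        elif inferred != t:
            return "TEXT"  # conflict: default to TEXT
    return inferred or "TEXT"  # all-NULL column: no evidence, pick TEXT

print(infer_type([1, 2, None]))  # INTEGER
print(infer_type([1, "2"]))      # TEXT (mixed)
print(infer_type([None, None]))  # TEXT (unknown)
```

A “strict” mode would raise on the conflict branch instead of returning TEXT.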
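Identifier quoting and boolean rendering are the two dialect switches called out in the dialect notes. A sketch of both, following the quoting rules described above (escaping here is the standard doubled-quote convention, but a real generator should also validate names):

```python
def quote_ident(name, dialect):
    """Quote a column/table name for the target dialect."""
    if dialect == "mysql":
        # MySQL uses backticks; embedded backticks are doubled
        return "`" + name.replace("`", "``") + "`"
    # Postgres and SQLite both accept double-quoted identifiers
    return '"' + name.replace('"', '""') + '"'

def render_bool(value, dialect):
    """SQLite has no real boolean type, so emit 1/0 there; TRUE/FALSE elsewhere."""
    if dialect == "sqlite":
        return "1" if value else "0"
    return "TRUE" if value else "FALSE"

print(quote_ident("order", "mysql"))     # `order`
print(quote_ident("order", "postgres"))  # "order"
print(render_bool(True, "sqlite"))       # 1
```

Quoting every identifier unconditionally is the lazy-but-safe option; key normalization (e.g., `created-at` → `created_at`) is the friendlier one.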
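Putting the workflow together, here is a toy generator that turns the example rows into CREATE TABLE + INSERT. This is a sketch of the steps described in this post, not the linked tool’s implementation: value escaping is minimal (single quotes doubled), type inference takes the first non-null value, and column order comes from the first object:

```python
def sql_type(values):
    """Crude inference: first non-null value decides the type."""
    for v in values:
        if v is None:
            continue
        if isinstance(v, bool):
            return "BOOLEAN"
        if isinstance(v, int):
            return "INTEGER"
        if isinstance(v, float):
            return "REAL"
        return "TEXT"
    return "TEXT"

def escape_sql(value):
    """Render a Python value as a SQL literal (Postgres-flavored booleans)."""
    if value is None:
        return "NULL"
    if isinstance(value, bool):
        return "TRUE" if value else "FALSE"
    if isinstance(value, (int, float)):
        return str(value)
    return "'" + str(value).replace("'", "''") + "'"

def json_to_sql(table, rows):
    cols = list(rows[0].keys())  # stable order: keys of the first object
    types = {c: sql_type([r.get(c) for r in rows]) for c in cols}
    ddl = "CREATE TABLE {} (\n  {}\n);".format(
        table, ",\n  ".join(f"{c} {types[c]}" for c in cols)
    )
    tuples = ",\n  ".join(
        "(" + ", ".join(escape_sql(r.get(c)) for c in cols) + ")" for r in rows
    )
    dml = "INSERT INTO {} ({}) VALUES\n  {};".format(table, ", ".join(cols), tuples)
    return ddl + "\n\n" + dml

rows = [
    {"id": 1, "name": "Alice", "active": True, "score": 12.5},
    {"id": 2, "name": "Bob", "active": False, "score": None},
]
print(json_to_sql("users", rows))
```

Running this reproduces the “boring SQL” shape from the example section; a dialect-aware version would additionally quote identifiers and adjust boolean/float rendering per target.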