I Built a CLI Runner for VS Code's .http Files in ~500 Lines of TypeScript
The .http format is better than it looks
The parser
The interpolator
The runner: one line of fetch, one testability choice
Trying it
Tradeoffs worth naming
Closing

A zero-dependency Node 20 CLI that parses and executes the .http format you already edit inside VS Code, JetBrains, or Neovim. No new syntax to learn, no GUI to open, no Rust to install — just run the file from a shell or CI.

🔗 GitHub: https://github.com/sen-ltd/http-runner

I had a pattern I kept repeating across projects. Next to src/ there would be a scratch.http file with a dozen requests I used while developing — auth flows, broken endpoints I was fixing, examples for teammates. I'd hit ⌘+click on "Send Request" inside VS Code and it worked. But then I wanted to run the same file from CI as a smoke test, and every option I found meant maintaining the requests as a second copy in some other tool's format. None of these is wrong. They're just all different files. The one file I already have — the one open in my editor while I'm developing — is the .http file. I wanted a tool whose only job was to read it and run it.

So I built http-runner: a Node 20 CLI with zero runtime dependencies, written in strict TypeScript, that parses the VS Code REST Client subset of the .http format and executes the requests with the built-in fetch. About 500 lines of source, 52 vitest tests, and a 136 MB alpine Docker image. In this post I'm going to walk through the surprisingly well-designed .http format itself, the parser, the {{var}} interpolator, and the one testing trick that made the whole thing pleasant to work on: injecting fetch.

The .http format is better than it looks

I'd been using .http files for years without ever thinking about the grammar. When I sat down to write a parser, the first thing I did was try to describe the format precisely, and I realized how tight it actually is. There are three things that make this work.
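Concretely, here is the shape of such a file. The requests themselves are made up, but every construct in it (the ### separator, the blank line before the body, @var, # @name) is standard REST Client syntax:

```http
@host = https://api.example.com

### list users
# @name listUsers
GET {{host}}/users
Accept: application/json

### create a user
# @name createUser
POST {{host}}/users
Content-Type: application/json

{ "name": "Ada Lovelace" }
```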
### is an unambiguous record separator. No HTTP message ever legitimately starts with ###. No header value legitimately has a line starting with ###. The grammar can be recovered from any cursor position: scan forward to the next ### and you know you're at a new request. It's the same trick YAML tried to pull off with ---, but with a separator that looks even less like data.

A blank line means "body starts here". This is literally how the HTTP wire format already works (CRLF CRLF ends the headers), so the parser is mirroring a rule the author already knows from other contexts. You don't have to explain it.

Everything else is a comment. #, //, lines the parser doesn't recognize before a request line — all skipped. This is incredibly forgiving. You can annotate your .http file with anything and it just works.

Two design decisions feel subtle but made the parser shorter: (1) variables can be declared anywhere in the file, not just at the top, so the parser does a single forward pass collecting variables and requests interleaved; and (2) the body is "everything up to the next ###", including blank lines inside the body, which means you don't need to parse the body content at all. JSON, XML, form-encoded, a binary stub with a ### sentinel at the end — the parser treats them identically. It's just text between markers.

The parser

The heart of it is a straightforward forward scan that keeps track of whether we're in header-land or body-land. One thing I'd like to call out: the parser does not interpolate {{var}} references. It returns the raw URL {{host}}/users unchanged. I was tempted to interpolate during parsing — it seems efficient, saving an extra pass — but then the parser depends on environment variables, which means a unit test of the parser depends on the environment, which means I'd have to mock process.env in parser tests, which means parser tests are no longer about parsing.

The interpolator

Splitting parse from interpolate turned out to cost nothing. The parser is 200 lines, pure, tested against fixtures with zero environment setup. The interpolator is another 70 lines that runs after. Each has its own vitest file, and it's the cleanest code boundary in the project. This is boring on purpose.

Tradeoffs worth naming

Two things the VS Code REST Client supports that I deliberately left out: random/UUID/date helpers ({{$guid}}, {{$randomInt 0 100}}) and chained-request references ({{previousRequest.response.body.$.token}}). They're real features in real tools. I didn't include them for two reasons.

First, there is no standard. Every tool does these slightly differently. If I implement the VS Code variant, my tool only parses VS Code's dialect. If I invent my own, I'm splitting the ecosystem further. Leaving them out keeps my files 100% compatible with every other .http runner in the world.

Second, {{$env.FOO}} is an escape hatch. Want a random UUID? uuidgen | API_ID=$(cat) http-runner api.http and reference {{$env.API_ID}}. Want a token from a previous request? Run that request first with --output json, jq the token, export it, and run the next file. It's more verbose on the command line, but the .http files stay simple and portable.

The runner: one line of fetch, one testability choice

The entire HTTP-execution logic is a thin wrapper around the built-in fetch. Two decisions worth flagging.
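In spirit, the execution step looks something like the sketch below. The type names, the result shape, and the opts parameter are my illustration, not the project's actual code; the load-bearing choice is that fetch arrives as a parameter rather than being read from the global scope:

```typescript
// Illustrative shapes, not http-runner's real types.
type HttpRequest = {
  method: string;
  url: string;
  headers: Record<string, string>;
  body?: string;
};

type RunResult = { ok: boolean; status: number; body: string };

// fetch is injected via opts, so tests never touch the network.
async function runRequest(
  req: HttpRequest,
  opts: { fetch: typeof fetch },
): Promise<RunResult> {
  try {
    const res = await opts.fetch(req.url, {
      method: req.method,
      headers: req.headers,
      body: req.body, // undefined for GET/HEAD; fetch ignores it
    });
    return { ok: res.ok, status: res.status, body: await res.text() };
  } catch (err) {
    // Network failures become a result, not a thrown exception,
    // so a CI run can report every request's outcome.
    return { ok: false, status: 0, body: String(err) };
  }
}

// A fake fetch for tests: returns a fresh synthetic Response per call.
const fakeFetch: typeof fetch = async () =>
  new Response('{"hello":"world"}', { status: 200 });
```

Because the function never reads the global fetch, a test can hand in any stand-in: one that records its arguments, one that throws, one that returns a canned 500.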
Node 20's built-in fetch is a zero-dependency superpower. In 2023 my package.json would have had node-fetch or undici pinned. In 2026 it has nothing. The string "dependencies" doesn't even appear in my package.json. The runtime Docker image doesn't have a node_modules directory — it's literally the compiled dist/ and a copy of package.json. The final image is still 136 MB, all of it node:20-alpine itself, which means my code contributes almost zero bytes. That's a nice flex.

opts.fetch is injected. You can see opts.fetch(...) being called, not a global. Every test passes a fake fetch that returns a synthetic Response. Zero network, zero nock, zero msw, no fixture servers. Vitest runs all 52 tests in under 200 ms, including type resolution. When I test "POST sends the body", I capture what my fake fetch was called with. When I test "errors don't throw", I make my fake fetch throw. When I test "non-2xx maps to ok=false", I return a fake Response with status: 500. This one design choice — one extra parameter on one function — is the reason the project has 52 tests and no flakiness. I learned this pattern years ago from Haskell-style "pass the effect in" thinking and I keep coming back to it.

Closing

The thing I keep coming back to is how much this format gets right.
### as a record separator, a blank line as the header/body boundary, @var = at file scope, # @name foo as opt-in request identity. You can describe the whole grammar in ten lines of BNF. You can write a parser for it in 200 lines without tears. You can already edit it in every modern IDE. And as of Node 20 you can execute it with zero dependencies.

If you're keeping scratch API requests in your repo anyway — and you should — http-runner turns them into the smoke test you've been meaning to write. Source and Docker image on GitHub: https://github.com/sen-ltd/http-runner.
Trying it

```shell
cat > /tmp/api.http <<'EOF'
# Fetch example
GET https://example.com
Accept: text/html
EOF

docker run --rm -v /tmp:/work http-runner /work/api.http
```
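The escape-hatch workflow from the tradeoffs section can be scripted the same way. A sketch only: the file names, the jq path, and the exact shape emitted by --output json are assumptions, not documented behavior:

```shell
# Chaining two files via the shell instead of in-format request references.
# login.http and api.http are hypothetical; adjust the jq path to whatever
# --output json actually emits.
export API_ID=$(uuidgen)          # referenced as {{$env.API_ID}} in the file
export TOKEN=$(http-runner login.http --output json | jq -r '.body.token')
http-runner api.http              # can now reference {{$env.TOKEN}}
```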