Tools: Proof server and Indexer: how Midnight processes transactions (2026)
Introduction
Prerequisites
First-run behavior
Confirming it's working and active
Wiring it to your DApp
Docker setup for local development
Running the proof server
Running the Indexer
Use the hosted Indexer instead
Errors you'll actually hit
Docker tags and version pinning
Why alignment matters
How to check
Best practices
Querying the Indexer with GraphQL
Query 1: get the latest block
Query 2: fetch a specific block by height
Query 3: current epoch information
Pro tip: schema introspection
What's available
WebSocket subscriptions for real-time updates
Available subscriptions
Connecting with WebSocket
indexerPublicDataProvider vs. direct GraphQL
The SDK approach
How they relate
Wrapping up

A hands-on walkthrough of Midnight's two core infrastructure components: the proof server that generates zero-knowledge proofs locally, and the Indexer that makes on-chain data queryable via GraphQL. This tutorial is aimed at developers who are new to Midnight and want to understand how transactions are processed behind the scenes.

When you build a DApp on Midnight, two pieces of infrastructure do most of the heavy work behind the scenes: the proof server and the Indexer. One handles the privacy side, the other the data side. You need both; without either, nothing works. Understanding both is the difference between a DApp that works and a DApp that fails in ways you can't explain.

The proof server is the reason your private data stays private on Midnight. It runs locally on your machine, takes the ZK circuits produced by your Compact contract, combines them with your private inputs, and produces a zero-knowledge proof. That proof is what gets submitted on-chain, not your data.

The Indexer handles the other direction. It watches the blockchain, parses every block and transaction, and exposes that data through a GraphQL API. Anything your DApp needs to read from the chain (contract state, transaction history, epoch info) flows through the Indexer.

In this tutorial we'll walk through what each component does, set them both up with Docker, talk about the version pinning that trips up most newcomers, and send real queries to the Indexer's GraphQL endpoint. By the end you'll have a working local stack and a mental model for how a Midnight transaction actually moves from your wallet to the chain and back. Let's go.

Before you begin, make sure you have:

Midnight transactions are different from what you're used to on Ethereum or Solana. There's no signature in the usual sense.
Instead, a transaction carries a zero-knowledge proof: a compact proof that essentially says "the computation described by this contract was executed correctly using valid private inputs" without revealing what those inputs actually were.

That proof doesn't appear out of nowhere, though. It requires the ZK circuits generated when your Compact contract is compiled, the verification keys that describe the circuit shape, and your actual witness data (balances, secrets, whatever the contract needs). The proof server is the process that takes all of that and produces the final zk-SNARK.

The important design choice here: it runs locally. Your private inputs never leave your machine. The server is a Docker container you run yourself, and the Midnight.js SDK talks to it over HTTP.

The first time you start the proof server, it has to fetch some artifacts. You'll see logs like this:

That's the server pulling down the ZK verification keys and ZKIR (Zero-Knowledge Intermediate Representation) source files from Midnight's S3 bucket. Integrity is checked before anything is used: if a file's hash doesn't match, the server refuses to start.

Once the download and caching are complete, you'll see:

That's an Actix web server (Rust-based, very fast) spinning up with four worker threads on port 6300. This is the endpoint the SDK will hit when it needs a proof generated.

A quick health check against localhost:

If you get that response, the server is ready to accept proof requests.

In your Midnight.js code, the SDK wrapper that talks to the proof server is httpClientProofProvider:

That's it. From there, every time your DApp submits a transaction, the SDK bundles up the circuit and witness, sends them to localhost:6300, waits for the proof, and attaches it to the unsigned transaction before submission.

Both the proof server and the Indexer ship as Docker images.
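As a practical aside, it helps to confirm something is actually listening on port 6300 before your DApp fires its first proof request. A minimal bash sketch — the helper name and retry logic are mine, not part of the Midnight tooling:

```shell
#!/usr/bin/env bash
# Wait until a TCP port accepts connections. Handy for confirming the proof
# server (default port 6300) is up before starting your DApp.
wait_for_port() {
  local host=$1 port=$2 tries=${3:-30}
  local i
  for ((i = 0; i < tries; i++)); do
    # bash's /dev/tcp pseudo-device opens a TCP connection; failure means
    # nothing is listening on that port yet.
    if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
      return 0
    fi
    sleep 1
  done
  return 1
}

# Example: wait_for_port localhost 6300 && echo "proof server is listening"
```

This only checks that the port is open, not that the server is healthy, but it catches the most common failure mode: the container simply isn't running yet.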
If you don't already have Docker on your machine, get that sorted first. For production, you can run your own Indexer and proof server on dedicated infrastructure, or use Midnight's hosted endpoints for Preview, Preprod, and Mainnet.

On Ubuntu (I'm on 24.04):

Verify with docker --version and docker run hello-world before going further. If docker run hello-world gives you a "permission denied" error, the group change hasn't taken effect yet; the newgrp docker step above usually fixes it without a full logout.

A few things worth calling out:

For a fully local setup, run the standalone Indexer image:

The APP__INFRA__SECRET is required: it is used to encrypt sensitive data the Indexer stores internally, and generating it with openssl rand -hex 32 gives you a clean 256-bit hex string. By default the standalone Indexer connects to a local Midnight node at ws://localhost:9944, so a fully self-contained stack also needs a node running. For most DApp development that's overkill.

For development work, the simplest approach is to skip the standalone Indexer entirely and hit Midnight's hosted endpoints:

This is the single most important thing to get right, and also the easiest to get wrong. The proof server tag MUST match the Ledger version. Here's the compatibility matrix at the time of writing:

The proof server generates proofs against a specific circuit format. The Ledger (the on-chain state machine) defines how those proofs are verified. Both sides have to agree on the exact format, verification keys, and field layout. If they don't, one of two things happens:

Before pulling any image, check the official support matrix: https://docs.midnight.network/relnotes/support-matrix

Now for the fun part. The Indexer's GraphQL API is where your DApp or a debugger reads on-chain data. Let's send some real queries to the Preview network endpoint and walk through what comes back. Fun fact: my first query used blocks instead of block, and the API corrected me.
The error messages are actually helpful.

Breaking down what you get back:

Pass an offset argument to target a specific block. Here's the genesis block:

This also pulls in the transactions in that block, and for each transaction the contract actions it triggered. Running it against Preview returned the genesis block with its initial bootstrapping transaction.

The result showed epochNo: 986757, durationSeconds: 1800 (a 30-minute epoch), and whatever elapsedSeconds had accumulated by the time of the call. Handy when you're building anything that cares about staking cycles or time-based contract logic.

Don't memorize the schema. Ask for it:

That returns every available top-level query along with its description. I do this every time I'm exploring a new version of the Indexer.

Top-level queries include:

Useful transaction fields: id, hash, protocolVersion, raw, block, contractActions, unshieldedCreatedOutputs, unshieldedSpentOutputs, zswapLedgerEvents, dustLedgerEvents.

Polling the Indexer for new data works, but it burns bandwidth and adds latency. For anything live (a wallet UI that updates when funds arrive, a dashboard that streams blocks, a DApp that reacts to contract state changes) you want subscriptions. The Indexer's GraphQL endpoint also accepts WebSocket connections, and the schema exposes a set of subscriptions you can tap into. Discovered via schema introspection:

The difference between polling and subscriptions looks small until you're running it at scale:

Here's the shape of a minimal subscription client:

I haven't tested this WebSocket connection myself yet, so verify the protocol your Indexer version expects before using this in production.

A note on the protocol: some GraphQL WebSocket servers use the legacy subscriptions-transport-ws protocol (which is what the snippet above speaks), and some use the newer graphql-ws protocol, which uses slightly different message types (connection_init, subscribe, next).
If the simple version doesn't work on your setup, check which protocol the endpoint expects and adjust the handshake accordingly. In practice, if you're using indexerPublicDataProvider from the SDK (which we'll cover next), all of this is handled for you.

You now have two ways to read Indexer data: through the Midnight.js SDK, or by hitting the GraphQL endpoint directly. Both are valid; they're useful in different situations. If you're just starting out, the direct GraphQL approach is easier to understand because you can see exactly what's happening.

The important thing to note: indexerPublicDataProvider is a wrapper around the same GraphQL API. Under the hood, the SDK sends the same queries you'd send by hand; it just wraps them in a typed, cleaner interface that plays well with the rest of the Midnight.js ecosystem.

So everything you learn from running raw GraphQL queries still helps you when you use the SDK later. Time spent poking at the GraphQL endpoint with curl makes you a better SDK user, because you develop intuition for what the SDK is actually doing. And if you ever need to step outside the SDK (to build tooling, to debug a weird state, to automate something) you already know the shape of the API.

The proof server and the Indexer are the two halves of how a Midnight DApp interacts with the network:

Share your feedback on X with #MidnightforDevs
What the proof server actually does

- The -v flag on midnight-proof-server enables verbose logging. Keep it on while you're learning: when something goes wrong, the extra output tells you exactly where.
- This command occupies your terminal. Open a new tab for everything else.
- First run pulls the image and downloads the ZK artifacts, so it takes several minutes. Subsequent runs are fast because Docker caches the image and the artifacts persist (inside the container, or in a volume if you mount one).

- Preview: https://indexer.preview.midnight.network/api/v4/graphql
- Preprod: https://indexer.preprod.midnight.network/api/v4/graphql
- Mainnet: https://indexer.mainnet.midnight.network/api/v4/graphql

These are the same Indexer code, just running against Midnight's test and production networks. You get a fully-synced Indexer for free, which is great when you're prototyping.

- permission denied while trying to connect to the Docker daemon socket: your user isn't in the docker group yet. Run the usermod and newgrp commands above.
- bind: address already in use on port 6300: something else is already bound to that port, or a previous container is still running. docker ps to find it, docker stop <container-id> to kill it.
- Cannot connect to the Docker daemon: the daemon isn't running. sudo systemctl start docker.

Those are quick fixes for the issues I hit while setting up.

- The proof is rejected outright when your transaction hits the chain.
- Worse, the transaction silently fails in a way that's very hard to debug, because the proof itself looked structurally fine but encoded assumptions the ledger no longer holds.

You don't want to be debugging that at 2 a.m. Just pin the versions.

- Never use :latest. Your setup might work today and break tomorrow for no obvious reason, and you'll ship bugs that only appear on some machines.
- Keep a note of your working combination in your repo's README. When a teammate clones the project six months from now, that one line saves them an afternoon of confusion.
- Pin every component together. When you upgrade the Ledger, also upgrade the proof server, the node, and the Compact compiler.
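Back to the standalone Indexer setup for a moment: the APP__INFRA__SECRET step can be sketched as below. The openssl command is from the setup section above; the docker invocation is commented out because the exact image name and tag are placeholders you should take from the support matrix, not from me:

```shell
#!/usr/bin/env bash
# Generate the 256-bit secret the standalone Indexer expects in
# APP__INFRA__SECRET: 32 random bytes, hex-encoded to 64 characters.
SECRET=$(openssl rand -hex 32)
echo "secret length: ${#SECRET} hex chars"

# Hypothetical invocation -- <indexer-image> and <pinned-tag> are placeholders,
# NOT real coordinates. Look them up before running anything like this.
# docker run \
#   -e APP__INFRA__SECRET="$SECRET" \
#   <indexer-image>:<pinned-tag>
```

Generating the secret fresh per environment (rather than committing one to the repo) keeps the "encrypt sensitive data" guarantee meaningful.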
- hash: the unique identifier for this block.
- height: the block number, a counter that only goes up.
- timestamp: Unix time in milliseconds.
- protocolVersion: which version of the Midnight protocol this block was produced under. Useful for detecting upgrades.
- author: the validator (stake pool operator) who produced the block.

- block: get a block by hash or height (latest if no offset).
- transactions: look up transactions by hash or identifier.
- contractAction: fetch contract actions by contract address.
- currentEpochInfo: current epoch number and timing.
- spoCount: number of stake pool operators.
- stakeDistribution: stake distribution across validators.
- Plus dustGenerationStatus, dParameterHistory, and others.

Useful block fields: hash, height, protocolVersion, timestamp, author, ledgerParameters, parent, transactions, systemParameters.

- blocks: subscribe to new blocks as they arrive, with an optional starting offset.
- contractActions: stream contract actions filtered by contract address.
- shieldedTransactions: shielded transaction events for a given session ID.
- unshieldedTransactions: unshielded transaction events for a given address.
- zswapLedgerEvents: ZSwap ledger events.
- dustLedgerEvents: DUST ledger events.
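Putting the fields above together, here's what the latest-block and epoch queries look like as raw curl calls against the hosted Preview endpoint. The field names follow the schema described in this post, but treat them as assumptions and confirm them with introspection against your Indexer version; the actual curl call is wrapped in a function so you only hit the network when you mean to:

```shell
#!/usr/bin/env bash
# Hosted Preview endpoint, from the hosted-endpoints list in this post.
INDEXER=https://indexer.preview.midnight.network/api/v4/graphql

# Latest block: `block` with no offset returns the most recent one.
LATEST='{"query":"{ block { hash height timestamp protocolVersion author } }"}'

# Current epoch timing, as discussed in the epoch section.
EPOCH='{"query":"{ currentEpochInfo { epochNo durationSeconds elapsedSeconds } }"}'

# POST one JSON payload to the Indexer.
run_query() {
  curl -s -X POST -H 'Content-Type: application/json' -d "$1" "$INDEXER"
}

# Usage (needs network access):
#   run_query "$LATEST"
#   run_query "$EPOCH"
```

The payloads are plain GraphQL-over-HTTP JSON, so the same strings work from Postman or any HTTP library.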
Why this matters

- Polling: "anything new?" → no. "anything new?" → no. "anything new?" → yes, here. Every poll is a request, whether or not there's data.
- Subscription: you ask once, and the Indexer pushes data to you whenever there's something to say.

For a block explorer or a live wallet view, this is the difference between a smooth UI and one that either lags or hammers the Indexer.

- A type-safe TypeScript interface: autocomplete, compile-time checks, the works.
- Clean integration with the rest of Midnight.js. deployContract() and findDeployedContract() both use this provider internally.
- Automatic serialization and deserialization: on-chain byte blobs become usable objects.
- Managed WebSocket subscription lifecycle: no manual reconnect logic.
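For contrast, here's roughly what managing that WebSocket by hand involves at the protocol level: the two messages a graphql-ws client sends (connection_init, then subscribe), using the blocks subscription from earlier. The message shapes follow the graphql-ws protocol spec, not Midnight-specific documentation, so verify them against whatever protocol your Indexer version speaks:

```shell
#!/usr/bin/env bash
# graphql-ws handshake messages for the `blocks` subscription.
# 1. Client sends connection_init, waits for connection_ack.
INIT='{"type":"connection_init","payload":{}}'
# 2. Client sends subscribe; the server then streams `next` messages
#    as new blocks arrive.
SUBSCRIBE='{"id":"1","type":"subscribe","payload":{"query":"subscription { blocks { height hash } }"}}'

echo "$INIT"
echo "$SUBSCRIBE"

# To actually speak WebSocket you'd use a client tool or library, e.g.
# (websocat assumed installed; subprotocol name per the graphql-ws spec):
#   websocat --protocol graphql-transport-ws \
#     wss://indexer.preview.midnight.network/api/v4/graphql
```

Seeing the raw messages makes it obvious what "managed subscription lifecycle" in the SDK is saving you from.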
The direct approach

- Full control over the exact query shape.
- Zero dependency on TypeScript or Node.js. You can hit the endpoint from Python, Go, Rust, a shell script, or Postman.
- A fast debugging loop: no rebuild, no bundler, just a curl.
- Freedom to build tools that don't fit the SDK's assumptions (custom analytics, block explorers, monitoring).
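That debugging loop gets even tighter with a tiny wrapper. A sketch, assuming python3 is available for JSON-escaping the query; the endpoint is the hosted Preview one from earlier, and the function names are mine:

```shell
#!/usr/bin/env bash
INDEXER=https://indexer.preview.midnight.network/api/v4/graphql

# Turn a bare GraphQL query string into a JSON payload, so callers never
# have to hand-escape quotes.
gql_payload() {
  python3 -c 'import json, sys; print(json.dumps({"query": sys.argv[1]}))' "$1"
}

# One-liner query runner (needs network access).
gql() {
  curl -s -X POST -H 'Content-Type: application/json' \
    -d "$(gql_payload "$1")" "$INDEXER"
}

# Usage:
#   gql '{ block { height hash } }'
#   gql '{ currentEpochInfo { epochNo } }'
```

Drop this in your shell profile and ad-hoc Indexer queries become one-liners.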
Picking between them

- proof server: the privacy side. Generates ZK proofs locally so your private data never leaves your machine.
- Indexer: the data access side. Makes on-chain state queryable via GraphQL, with WebSocket subscriptions for real-time updates.

Before you go, remember: pin your Docker tags and check the support matrix religiously, use indexerPublicDataProvider for building DApps and direct GraphQL for debugging, and use schema introspection whenever you want to explore what the Indexer can do.

- Midnight docs: https://docs.midnight.network/getting-started
- Support / compatibility matrix: https://docs.midnight.network/relnotes/support-matrix
- Midnight Discord: https://discord.com/invite/midnightnetwork

From here, check out the official tutorials for building your first contract on Midnight.