How I Deployed a Live Blockchain Node (ARC) on AWS EC2 - A Complete Step-by-Step Guide


Introduction

Architecture Overview

Part 1: Setting Up the AWS EC2 Instance

Part 2: Installing Required Tools

Part 3: Starting the Node

Part 4: Configuration Changes Made

Part 5: Final Working State

Part 6: Load Testing the Node

Conclusion

Docker

Networking

Debugging

Introduction

This article documents a complete, real-world deployment of an Arc blockchain node on AWS EC2. Unlike tutorials that show only the happy path, this guide captures every error encountered, explains why it happened, and shows exactly how it was fixed.

Architecture Overview

By the end of this guide you will have a fully operational blockchain node with 5 validators, a block explorer, and a complete monitoring stack running on AWS. The full stack consists of the following components running in Docker containers on a single EC2 instance:

- Arc Consensus Node (arc_consensus): 5 validator nodes + 1 full node
- Arc Execution Node (arc_execution): EVM-compatible execution layer
- Blockscout: blockchain explorer with PostgreSQL database
- Nginx: reverse proxy routing traffic to Blockscout
- Prometheus: metrics collection from all services
- Grafana: visualization and dashboards
- cAdvisor + Node Exporter: container and system metrics

Part 1: Setting Up the AWS EC2 Instance

1.1 Choosing the Right Instance Type

Building and running a blockchain node is resource-intensive. The wrong instance size will cause build failures or poor performance. The recommended configuration is:

- Instance Type: t3.xlarge or better (Rust compilation needs 4+ vCPUs)
- vCPUs: 4 (parallel Docker builds)
- RAM: 16 GB (multiple containers + DB)
- Storage (EBS): 100 GB SSD (gp3) (Docker images + chain data)
- OS: Ubuntu 22.04 LTS

Important: Using a t3.medium (2 vCPU, 4 GB) will cause the Rust compilation to run out of memory and fail after 30-60 minutes.

1.2 Configuring Security Group Inbound Rules

After launching the instance, configure the Security Group to allow external access to the required ports: at minimum 22 (SSH), 80 (Nginx/Blockscout), 3000 (Grafana), and 9090 (Prometheus).

Important: Opening only port 80 is not enough. Grafana (3000) and Prometheus (9090) need their own inbound rules.

Part 2: Installing Required Tools

2.1 Connect to Your EC2 Instance

```shell
ssh -i your-key.pem ubuntu@your-ec2-public-ip
```

2.2 Clone the Arc Node Repository

```shell
cd ~
git clone https://github.com/circlefin/arc-node
cd arc-node
git submodule update --init --recursive
```

Important: The submodule step may take several minutes. Do not interrupt it.
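Since an undersized instance only fails after a long build, it is worth confirming that the machine you just connected to actually matches the recommendations from section 1.1 before installing anything. A minimal sketch using standard coreutils (the thresholds are this guide's recommendations, not limits enforced by Arc; note that `free -g` rounds down, so a 16 GB instance may report 15):

```shell
# Compare this machine against the recommended minimums from section 1.1:
# 4 vCPUs, 16 GB RAM, 100 GB of disk on /.
cpus=$(nproc)
ram_gb=$(free -g | awk '/^Mem:/ {print $2}')
disk_gb=$(df -BG --output=size / | tail -1 | tr -dc '0-9')

echo "vCPUs: $cpus (recommended: >= 4)"
echo "RAM:   ${ram_gb} GB (recommended: >= 16)"
echo "Disk:  ${disk_gb} GB (recommended: >= 100)"

# Warn rather than abort: the build may still work, just slowly.
[ "$cpus" -ge 4 ] || echo "WARN: fewer than 4 vCPUs; the Rust build may fail or take much longer"
```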
2.3 Install System Dependencies

```shell
sudo apt-get update
sudo apt install docker.io make nodejs npm libclang-dev -y
sudo service docker start
sudo usermod -aG docker $USER
```

Note: After adding yourself to the docker group, fully close and reopen the terminal for the change to take effect.

2.4 Install Node.js 22

The system Node.js version is outdated. Version 22 is required:

```shell
sudo npm install -g n
sudo n 22
hash -r
```

2.5 Install Foundry

```shell
curl -L https://foundry.paradigm.xyz | bash
source ~/.bashrc
foundryup -i v1.4.4
```

Note: If foundryup is not found after source ~/.bashrc, fully close and reopen the terminal, cd back into arc-node, and run foundryup -i v1.4.4 again.

2.6 Update Docker Compose

The system Docker Compose version is incompatible with the Arc node. Install v2.24.0 manually:

```shell
sudo mkdir -p /usr/local/lib/docker/cli-plugins
sudo curl -SL https://github.com/docker/compose/releases/download/v2.24.0/docker-compose-linux-x86_64 -o /usr/local/lib/docker/cli-plugins/docker-compose
sudo chmod +x /usr/local/lib/docker/cli-plugins/docker-compose
```

2.7 Install Rust

```shell
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```

When prompted, type 1 and press Enter to proceed with the default installation, then load the toolchain into the current shell:

```shell
source $HOME/.cargo/env
```

2.8 Install npm Dependencies

```shell
cd ~/arc-node
npm install
```

Part 3: Starting the Node

3.1 Build and Launch the Testnet

```shell
cd ~/arc-node
make testnet
```

On the first run, Arc compiles its Rust source code inside Docker. This takes 60-180 minutes, and the system will be under heavy load. This is completely normal; do not interrupt the process.

Note: If the build fails partway through, run make testnet again. Docker caches completed layers, so it will resume from where it left off.
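Once make testnet finishes, you can confirm the chain is actually producing blocks before touching the explorer. This sketch assumes the execution layer exposes a standard Ethereum JSON-RPC endpoint and that it is mapped to localhost:8545; both the URL and the port are assumptions, so check `docker ps` for the real mapping on your instance:

```shell
# Poll eth_blockNumber twice, five seconds apart; if the hex result changes,
# blocks are being produced. The RPC URL below is an assumption -- confirm
# the host port mapping with `docker ps` before relying on it.
check_block_production() {
  rpc_url="${1:-http://localhost:8545}"
  payload='{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
  first=$(curl -s -X POST -H 'Content-Type: application/json' -d "$payload" "$rpc_url" | grep -o '"result":"[^"]*"')
  sleep 5
  second=$(curl -s -X POST -H 'Content-Type: application/json' -d "$payload" "$rpc_url" | grep -o '"result":"[^"]*"')
  echo "before: $first"
  echo "after:  $second"
  [ -n "$first" ] && [ "$first" != "$second" ] && echo "OK: block height is advancing"
}

# Usage: check_block_production http://localhost:8545
```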
3.2 Verify the Node is Running

```shell
docker ps
```

You should see the following containers running:

- validator1_cl, validator2_cl, validator3_cl, validator4_cl, validator5_cl
- validator1_el, validator2_el, validator3_el, validator4_el, validator5_el
- full1_cl, full1_el
- blockscout-backend, blockscout-frontend, blockscout-proxy
- blockscout-db

3.3 Start the Monitoring Stack

Grafana and Prometheus are in a separate compose file and must be started independently:

```shell
docker compose -f /home/ubuntu/arc-node/.quake/monitoring/compose.yaml up -d
```

Important: The monitoring stack is not included in make testnet.

Part 4: Configuration Changes Made

4.1 blockscout.yaml: Frontend API Host

File: arc-node/deployments/blockscout.yaml

Before (broken on remote servers):

```yaml
NEXT_PUBLIC_API_HOST: localhost
NEXT_PUBLIC_APP_HOST: localhost
```

After:

```yaml
NEXT_PUBLIC_API_HOST: <your-ec2-public-ip>
NEXT_PUBLIC_APP_HOST: <your-ec2-public-ip>
```

4.2 compose.yaml: Network Configuration

File: arc-node/.quake/localdev/compose.yaml

Before:

```yaml
blockscout:
  driver: bridge
  internal: true   # blocks backend from reaching chain RPC
```

After:

```yaml
blockscout:
  driver: bridge
  internal: false
```

4.3 monitoring/compose.yaml: Grafana User and Ports

File: arc-node/.quake/monitoring/compose.yaml

Before:

```yaml
user: '501'
ports:
  - 127.0.0.1:3000:3000
```

After:

```yaml
user: '472'
ports:
  - 0.0.0.0:3000:3000
```

4.4 prometheus.yml: Correct Scrape Targets

```yaml
scrape_configs:
  - job_name: 'validators'
    static_configs:
      - targets:
          - 'host.docker.internal:9101'
          - 'host.docker.internal:9201'
          - 'host.docker.internal:9301'
```

Part 6: Load Testing the Node

With the node fully running, test it by sending real transactions:

```shell
make testnet-load RATE=10 TIME=30
```

This sends 10 transactions per second for 30 seconds, a total of roughly 300 transactions across all 5 validators. The output confirms successful transaction delivery:

```
30.067s: Total sent 303 txs (35752 bytes), 10.1 tx/s
```

After running the load test, refresh the Blockscout explorer at http://&lt;your-ec2-public-ip&gt;/ to see the transactions appear in real time.
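Beyond watching the explorer, you can spot-check the load test directly against the chain. This sketch counts the transactions in the latest block via the standard eth_getBlockByNumber JSON-RPC call; the RPC URL is an assumption (confirm with `docker ps`), and it requires jq, which the earlier dependency step did not install (`sudo apt install jq -y`):

```shell
# Count the transactions in the most recent block. At 10 tx/s across 5
# validators you should see a handful of transactions per block during
# the load test. Assumes jq is installed and the RPC port mapping below
# matches your `docker ps` output.
txs_in_latest_block() {
  rpc_url="${1:-http://localhost:8545}"
  curl -s -X POST -H 'Content-Type: application/json' \
    -d '{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["latest", false],"id":1}' \
    "$rpc_url" | jq '.result.transactions | length'
}

# Usage (while the load test is running): txs_in_latest_block
```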
Conclusion

Docker

1) Bind-mount host paths must exist before docker compose up, so Docker does not create them (as root) for you.
2) Container-to-container communication uses internal ports, not host-mapped ports.
3) internal: true on a network cuts off ALL external access, including calls the services themselves need to make.
4) Each service runs as a specific UID, so always chown data directories to match.

Networking

1) Frontend environment variables like NEXT_PUBLIC_API_HOST are resolved by the browser, not the server.
2) Always use the public IP for any variable that the browser reads.
3) Opening port 80 in a Security Group does NOT open 3000 or 9090; each port needs its own rule.

Debugging

1) Read docker logs carefully: every crash has an exact error message.
2) Port-scan with curl to find the actual metrics endpoints instead of guessing.
3) Use docker exec &lt;container&gt; ss -tlnp to see what a container is actually listening on.
4) A cascade failure (many errors at once) usually has one root cause, so find the first error.
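The debugging checklist above can be rolled into one small helper that inspects a misbehaving container: recent logs first (every crash has an exact error message), then its listening sockets. This is a sketch, not part of the Arc tooling; the container name is whatever `docker ps` shows (e.g. blockscout-backend), and `ss` must exist inside the image, which some minimal images omit:

```shell
# Inspect a container per the debugging checklist: logs, then sockets.
# Usage: inspect_container blockscout-backend
inspect_container() {
  name="$1"
  echo "== last 30 log lines for $name =="
  docker logs --tail 30 "$name" 2>&1
  echo "== listening sockets inside $name =="
  # Falls back to a message when the image does not ship `ss`.
  docker exec "$name" ss -tlnp 2>/dev/null || echo "(ss not available in this image)"
}
```

Running it against each container in turn is usually enough to find the first error in a cascade failure.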