Building a Developer-First Cloud Storage Platform on Azure Blob (Lessons Learned)


Source: Dev.to

When you build apps long enough, you eventually run into file storage. User uploads. Media previews. Private downloads. Public sharing. Expiring links. Access control.

On paper, services like S3 and Firebase Storage solve this. In practice, I kept running into friction:

- Overly complex permission models
- Confusing bucket structures
- Boilerplate-heavy integrations
- Public/private edge cases
- Performance surprises with large files

So I decided to build a storage layer from scratch on top of Azure Blob Storage and document what I learned along the way. This eventually became FileFreak.io, but the more interesting part is the architecture and tradeoffs.

## Why Not Just Use S3 Directly?

Most applications don't need the full flexibility of S3. They need:

- Secure uploads
- Private-by-default storage
- Controlled sharing
- Clean metadata management
- Reliable streaming

The problem is not raw storage. It's everything around it.

## Architecture Overview

- Backend: Node.js with streaming-based request handling
- Storage: Azure Blob Storage (hot tier)
- Database: MSSQL for metadata and access control
- Auth: JWT sessions + Argon2 password hashing
- Realtime: WebSockets for upload progress tracking

Azure Blob handles durability and scale. The backend handles logic, security, and developer ergonomics.

## Handling Large Uploads Without Blowing Memory

One of the biggest mistakes I made early on was buffering too much data in memory. The correct approach is fully streaming uploads:

```javascript
// uploadStream consumes the request stream directly — nothing is buffered
await blockBlobClient.uploadStream(req);
```

Streaming architecture matters more than people think:

- Never buffer entire files in memory
- Respect backpressure
- Destroy streams properly on error
- Handle partial uploads cleanly
- Watch for ERR_STREAM_WRITE_AFTER_END issues

## Private by Default

Storage systems tend to default toward public access or complicated ACLs. Instead, I designed the system around:

- All files private by default
- Signed access for downloads
- Public links as explicit opt-in
- Optional password protection
- Expiration timestamps stored in metadata

Security is easier to reason about when the default state is locked down.

## Real-Time Upload Progress

Instead of polling for upload status, I used WebSockets to emit progress events during streaming uploads. This significantly improves UX compared to traditional form-based uploads. It's a small detail, but it makes the system feel modern.

## Metadata Is Everything

Blob storage is not your database. Every file is paired with structured metadata stored in MSSQL:

- Folder hierarchy
- Access level
- Public token (if generated)
- Expiration time

Storage handles durability. The database handles logic. Trying to overload blob metadata quickly becomes painful.

## Performance Considerations

Azure Blob hot tier pricing is attractive at around $0.02 per GB. But storage cost is rarely the main expense. The real considerations are:

- Egress bandwidth
- API compute
- Streaming efficiency
- Database load
- File preview handling

Optimizing for streams instead of buffers made a measurable difference in memory stability under load.

## Building the UI Layer

On the frontend, I focused on:

- A file explorer-style dashboard
- Drag-and-drop uploads
- Nested folders
- Trash and restore
- In-browser previews for images, videos, and PDFs
- Secure sharing links

The goal was to reduce friction, not increase flexibility. Most apps don't need 500 configuration flags. They need reliability and clarity.

## What's Next

The next phase is exposing the storage layer as:

- A public API
- SDKs for easy integration
- Programmatic file management
- Expanded permission controls

The interesting question is not whether storage exists. It's whether developers want maximum flexibility or sensible defaults with guardrails.

## Quick Feature Snapshot

- Private-by-default cloud storage
- Fast uploads with real-time progress
- Streaming uploads and downloads
- In-browser previews
- Folder organization and trash restore
- Secure public sharing links with optional password protection and expiration
- Public developer API
- SDKs for integration
- Enhanced team permissions

## I'm Curious

For those of you who've built apps involving file uploads:

- What's the most frustrating part?
- Is the pain UX, permissions, pricing, performance, or something else?
- Would you trade flexibility for simplicity?

If you're curious about what this evolved into, it became FileFreak.io. But the architecture lessons were the real takeaway.