Why CAPTCHAs today are so bad (and what we should be building instead)

Source: Dev.to

Modern CAPTCHAs are meant to stop bots, but in practice they mostly punish humans. Clicking traffic lights, rotating images, or solving puzzles breaks UX, accessibility, and flow, while advanced bots often pass anyway.

The core problem isn't the implementation; it's the assumption that users are either "human" or "bot." Real behavior is probabilistic. Timing, cadence, input entropy, device consistency, and trajectories over time all exist in shades of gray, not absolutes. Most CAPTCHA systems hide this uncertainty, yet every security decision already depends on configuration: thresholds, confidence levels, and tolerance for risk. Two companies can run the same detection logic and behave completely differently, and that's not a bug, it's policy.

I've been working on an experimental project: an invisible behavioral security system that doesn't pretend to be perfect. Instead of blocking users aggressively, it applies progressive enforcement based on configurable risk tolerance. Detection admits uncertainty, UX degrades gradually, and behavior improves over time.

I'm currently exploring white-label use cases and real-world feedback. If this idea interests you or you want to discuss behavioral security:
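To make the "same detection, different policy" idea concrete, here is a minimal sketch of what probabilistic scoring plus progressive enforcement could look like. Everything in it is illustrative and assumed: the signal names, weights, `RiskPolicy` fields, and action labels are hypothetical, not the project's actual API.

```python
# Hypothetical sketch: blend behavioral signals into a probability-like
# risk score, then map the score to a progressive action via a
# per-deployment policy. All names and weights are illustrative.
from dataclasses import dataclass


@dataclass
class RiskPolicy:
    """Per-deployment configuration: same detection, different policy."""
    challenge_threshold: float = 0.5   # above this, add friction
    block_threshold: float = 0.9       # above this, deny outright


def risk_score(signals: dict[str, float]) -> float:
    """Weighted blend of behavioral signals, each normalized to [0, 1].

    Higher means more bot-like. The result is a graded score,
    never a binary human/bot verdict.
    """
    weights = {
        "timing_regularity": 0.35,     # inhumanly even inter-event timing
        "input_entropy_deficit": 0.25, # too little variation in input
        "device_inconsistency": 0.25,  # fingerprint drifts mid-session
        "trajectory_anomaly": 0.15,    # pointer paths too straight/fast
    }
    score = sum(weights[k] * signals.get(k, 0.0) for k in weights)
    return max(0.0, min(1.0, score))


def enforce(score: float, policy: RiskPolicy) -> str:
    """Progressive enforcement: UX degrades gradually with risk."""
    if score >= policy.block_threshold:
        return "block"
    if score >= policy.challenge_threshold:
        return "challenge"  # e.g. rate-limit or a lightweight check
    return "allow"


# Two deployments running identical detection, differing only in policy:
strict = RiskPolicy(challenge_threshold=0.3, block_threshold=0.7)
lenient = RiskPolicy(challenge_threshold=0.6, block_threshold=0.95)

signals = {
    "timing_regularity": 0.9,
    "input_entropy_deficit": 0.2,
    "device_inconsistency": 0.1,
    "trajectory_anomaly": 0.0,
}
score = risk_score(signals)           # 0.35*0.9 + 0.25*0.2 + 0.25*0.1 = 0.39
print(enforce(score, strict))         # prints "challenge"
print(enforce(score, lenient))        # prints "allow"
```

The same score lands on different sides of each policy's thresholds, which is the point: the gray-zone score is exposed to configuration rather than hidden behind a hard-coded human/bot cutoff.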
Discord: pixelhollow