PostHog Self-Hosted Review: Worth Running Yourself?

The phrase posthog self hosted review usually signals one thing: you like PostHog’s product analytics, but you don’t want a SaaS bill (or you can’t ship user data to someone else’s cloud). I’ve run PostHog self-hosted in production, and this is the no-fluff take: it’s powerful, flexible, and sometimes fiddly, especially once your event volume stops being “startup small.”

What you actually get with self-hosted PostHog

Self-hosting PostHog isn’t just “analytics on your own server.” It’s a bundle of capabilities that, together, can replace a small stack:

- Product analytics (events, funnels, retention, cohorts)
- Autocapture (clicks, pageviews, elements) if you want it
- Session replay (similar category to hotjar and fullstory)
- Feature flags + A/B testing (a huge plus if you don’t want to add another vendor)
- Data warehouse exports / pipelines (depending on your setup)

The pragmatic upside: you can consolidate tools. The pragmatic downside: consolidation means more surface area to operate.

Where PostHog tends to beat mixpanel and amplitude for self-hosting-minded teams is the “single platform” feel: you’re not stitching together product analytics + replay + flags + experiments across different contracts and SDKs.

Deployment reality: Docker is easy; operations are not

Most people start with Docker Compose and think, “That was painless.” That’s true for a demo or a low-traffic app. Where self-hosting gets real:

- ClickHouse management: PostHog leans heavily on ClickHouse. It’s fast and great for analytics, but you’re now responsible for performance, disk, and backups.
- Storage growth: events + replays can balloon. Replays are the silent killer.
- Upgrades: PostHog ships quickly. Staying current is good for security and features, but it means you need a process.
- Observability: you’ll want dashboards/alerts (CPU, memory, disk IO, ClickHouse latency, ingestion lag).

My opinion: if you’re already comfortable running stateful services (Postgres, Redis, Kafka, ClickHouse), self-hosting PostHog is reasonable. If not, it can become a “why is analytics paging us?” situation.

Data control, compliance, and why teams self-host

The best reason to self-host is data governance. If you have strict requirements (or just a strong internal stance), keeping raw event data and replays inside your environment simplifies a lot of conversations. Self-hosting also gives you sharper control over:

- PII handling: decide what to capture, mask, or drop at ingestion
- Retention policies: enforce time limits for events and replays
- Network boundaries: private subnets, VPC peering, internal-only dashboards

It’s also a hedge against tool sprawl: you can run analytics the way you run the rest of your platform.

A concrete example: if you’re building a fintech-ish workflow where user behavior is sensitive, session replay can be risky unless you have strict masking. With self-hosting, you can enforce that policy centrally rather than hoping every engineer remembers to sanitize on the client.

A practical snippet: capture events with PostHog (and avoid noisy data)

Here’s a minimal JavaScript example that captures a signup event with properties, while keeping the payload intentional. The key is: don’t dump everything. Capture what you’ll query.

```javascript
import posthog from 'posthog-js'

posthog.init('YOUR_PROJECT_API_KEY', {
  api_host: 'https://posthog.yourdomain.com',
  capture_pageview: false, // avoid noisy defaults if you track routes yourself
})

export function trackSignup({ plan, source }) {
  posthog.capture('signup_completed', {
    plan,
    source,
    // Avoid sending PII like email unless you have a strong reason + policy
    timestamp: new Date().toISOString(),
  })
}

// Later, after identifying a user:
posthog.identify('user_123')
```

Actionable tip: define a short event taxonomy (10–30 core events) before you turn on autocapture or replay. Otherwise, you’ll drown in entropy and still not answer basic product questions.

PostHog vs mixpanel, amplitude, hotjar, fullstory: when to pick what

These tools overlap, but their “center of gravity” differs.

- PostHog (self-hosted): best if you want an integrated stack (analytics + flags + experiments + replay) and you’re willing to run infra.
- mixpanel: excellent product analytics UX and reporting polish. If you want fewer operational concerns and can use SaaS, it’s often faster to value.
- amplitude: strong for advanced behavioral analytics and org-scale governance; again, typically chosen as a SaaS-first bet.
- hotjar: great for lightweight qualitative insight (heatmaps, feedback) rather than deep event modeling.
- fullstory: session replay leader vibe; powerful but can be pricey and replay-heavy by design.

Opinionated take: if session replay is your primary need, evaluate hotjar or fullstory first because their workflows are built around it. If product analytics is primary and replay is secondary, PostHog’s “good enough replay + strong analytics” combo is compelling.

Final verdict: who should self-host PostHog (and who shouldn’t)

Self-hosting PostHog makes sense when you have at least one of these:

- Compliance or customer requirements that push you away from SaaS
- An infra team that’s comfortable operating ClickHouse + backups
- A desire to unify analytics, feature flags, and experiments

Skip self-hosting if you:

- Don’t have time to own upgrades, storage tuning, and on-call risk
- Only need a small slice of analytics and want the simplest path

If you’re already running modern analytics infrastructure, PostHog self-hosted can be a pragmatic, developer-friendly choice, especially if you’d otherwise pay for multiple tools. If you’re not there yet, starting with a hosted offering (from PostHog or a SaaS-focused alternative) can be the calmer move, then migrate when data control or cost makes it worth it.
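To make the central-masking idea concrete, here is a minimal sketch of a shared replay policy for posthog-js. Treat the `session_recording` option names (`maskAllInputs`, `maskTextSelector`) as assumptions to verify against the posthog-js version you run; the host and the `[data-sensitive]` selector are placeholders.

```javascript
// replay-policy.js: one shared masking policy, imported by the single
// module that calls posthog.init, so no individual page can opt out.
// Option names under session_recording are assumptions to verify
// against your posthog-js version.
const replayPolicy = {
  maskAllInputs: true,                  // never record raw input values
  maskTextSelector: '[data-sensitive]', // mask any element tagged as sensitive
}

// In your init module (illustrative):
//   posthog.init('YOUR_PROJECT_API_KEY', {
//     api_host: 'https://posthog.yourdomain.com',
//     session_recording: replayPolicy,
//   })
```

Because every page goes through the same init module, tightening the policy is a one-line change instead of an audit of every form in the codebase.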
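One way to act on the taxonomy tip is to write the allowed events down as data and refuse anything off the list before it reaches PostHog. A minimal sketch; the event names and the `safeCapture` wrapper are illustrative, not part of PostHog's API:

```javascript
// A deliberately small event taxonomy: if an event isn't on this list,
// it doesn't get captured. Names here are illustrative.
const EVENT_TAXONOMY = new Set([
  'signup_started',
  'signup_completed',
  'project_created',
  'invite_sent',
  'plan_upgraded',
])

// Wrap capture so off-taxonomy events fail loudly in development
// instead of silently polluting your data.
function safeCapture(client, event, properties = {}) {
  if (!EVENT_TAXONOMY.has(event)) {
    throw new Error(`Unknown event "${event}" — add it to the taxonomy first`)
  }
  client.capture(event, properties)
  return true
}
```

In production you might log a warning instead of throwing; the point is that the taxonomy lives in code rather than in tribal knowledge.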
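On the consolidation point (flags and experiments living in the same tool as analytics), gating a code path on a PostHog flag is only a few lines. The `isFeatureEnabled` method is posthog-js's call for boolean flags (verify against your SDK version); the flag key and flow names here are made up for illustration.

```javascript
// Branch a UI flow on a PostHog feature flag. The client is passed in
// so the function is testable with a stub; the flag key 'new-onboarding'
// is illustrative.
function pickOnboardingFlow(client) {
  // Note: flags load asynchronously after init; in a real app, wait for
  // posthog's flags-loaded callback before branching.
  return client.isFeatureEnabled('new-onboarding') ? 'new-flow' : 'legacy-flow'
}
```

With the real SDK, `client` is the initialized `posthog` object, so analytics and flag checks share one integration.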