# I Built Digg X: A Chrome Extension That Turns X Bookmarks into a Searchable AI Research Inbox
2026-02-17
admin
I built Digg X because I had a personal problem: I was bookmarking great posts and threads on X/Twitter every day, but I couldn't reliably turn them into reusable knowledge. So I built a workflow-first Chrome extension, available on the Chrome Web Store as Digg X - The Social Scout.

## What Digg X does

At a high level, Digg X is a social intelligence layer on top of X:

- Capture what matters while you browse X
- Sync it into a structured cloud inbox
- Run AI analysis on top of clean, versioned content

I wanted this to feel native to browsing, not like "yet another dashboard." Concretely, the extension will:

- Extract content from posts/threads when you bookmark
- Convert raw content into Markdown + metadata
- Sync everything to a backend inbox (/api/v2/sync/inbox)
- Build profiles/insights around creators and content

## Product architecture

Digg X is a monorepo with three main apps:

- digg-x-ext: Chrome extension (Plasmo + React)
- digg-x: Cloudflare Worker backend (Hono + D1 + KV + R2 + Queue)
- digg-x-web: Web app (Vite + React Router + Pages Functions)

This split worked well:

- The extension is optimized for capture and browser context
- The Worker backend is optimized for ingestion, sync, and APIs
- The web app is optimized for browsing, organization, and reports

## Key implementation details

## 1) Bookmark interception with fallback extraction

When a user clicks X's bookmark button, the extension intercepts that action, tries to extract structured data from the bridge/JSON first, and falls back to DOM parsing if needed. This reduced breakage when X's UI changed.
```typescript
// Simplified from bookmark scanner logic
if (tweetId) {
  const tweetResult = await getTweetDataFromBridge(tweetId)
  if (tweetResult) data = MarkdownExtractor.extractFromTweetResult(tweetResult)
}
if (!data) {
  data = MarkdownExtractor.extractFromPage(bookmarkButton as HTMLElement)
}
```
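The bridge-first, DOM-fallback chain generalizes to any ordered list of extractors. A minimal sketch of that pattern — the helper name and types here are mine, not from the Digg X codebase:

```typescript
// Hypothetical helper generalizing the "structured source first, DOM second"
// extraction chain; names and types are illustrative, not from Digg X.
type Extractor<T> = () => Promise<T | null>

async function extractWithFallback<T>(extractors: Extractor<T>[]): Promise<T | null> {
  for (const extract of extractors) {
    try {
      const result = await extract()
      if (result !== null) return result // first successful source wins
    } catch {
      // A throwing extractor is treated like a miss: fall through to the
      // next, lower-fidelity source (e.g. JSON bridge -> DOM parsing).
    }
  }
  return null // every source failed; caller can skip or retry this item
}
```

The point of ordering extractors by fidelity is that UI changes on X only degrade capture quality instead of breaking it outright.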
## 2) Structured sync API for ingestion

The extension sends normalized content to the backend sync endpoint:

```
PUT /api/v2/sync/inbox
```

On the backend, this lands in D1 and object storage, with queue-based async processing for heavier tasks.
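As a sketch of what a call to that endpoint might look like from the extension side — only the method and path come from the post; the payload fields are assumptions:

```typescript
// Hypothetical payload for PUT /api/v2/sync/inbox; field names are
// illustrative — only the method and path appear in this post.
interface InboxItem {
  tweetId: string
  markdown: string   // normalized content from the extractor
  capturedAt: string // ISO timestamp of the bookmark action
}

interface SyncRequest {
  url: string
  method: "PUT"
  headers: Record<string, string>
  body: string
}

// Builds the request descriptor the extension would hand to fetch().
function buildSyncRequest(item: InboxItem): SyncRequest {
  return {
    url: "/api/v2/sync/inbox",
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(item),
  }
}
```

Keeping the payload small and already-normalized is what makes many tiny sync operations cheap on the Worker side.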
## 3) Side panel AI profile analysis

Digg X includes an analyzer flow in the side panel. It attempts API-based analysis first and falls back to UI automation if the API route fails:

```typescript
try {
  const result = await executeViaAPI(handle, prompt, lang, pluginI18n, selectors)
  return mergeWithDomData(result, domData, interceptedData)
} catch {
  const result = await executeViaUI(handle, prompt, lang, pluginI18n, selectors)
  return mergeWithDomData(result, domData, interceptedData)
}
```

This "API-first, UI-fallback" strategy made the feature significantly more resilient.
## 4) Practical usage limits for free users

I added a lightweight local quota mechanism for free users (currently 3 analyses/day). It keeps costs controlled while still giving users a real experience.
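The free-tier limit described in this post (3 analyses/day) can be tracked with a tiny local counter. A minimal sketch, assuming a simple string key-value store (chrome.storage or localStorage would both fit behind this interface); everything except the limit itself is illustrative:

```typescript
// Hypothetical local daily quota; only the 3/day limit comes from the post.
interface QuotaStore {
  get(key: string): string | null
  set(key: string, value: string): void
}

const DAILY_LIMIT = 3

// Returns true and increments the counter if the user still has quota for
// `today` (e.g. "2026-02-17"); returns false once the limit is reached.
// Keying by date means the counter resets naturally at midnight.
function tryConsumeAnalysis(store: QuotaStore, today: string): boolean {
  const raw = store.get("quota:" + today)
  const used = raw ? parseInt(raw, 10) : 0
  if (used >= DAILY_LIMIT) return false
  store.set("quota:" + today, String(used + 1))
  return true
}
```

A purely local counter is trivially bypassable, but for a cost-control hint (rather than a security boundary) it avoids a round-trip before every analysis.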
## Why Cloudflare for the backend

I chose Cloudflare because the stack maps well to this product shape:

- Worker for fast API edge runtime
- D1 for relational query patterns
- KV for hot cache
- R2 for content/media blobs
- Queue for async enrichment

For a browser-first tool with many small sync operations, this setup has been cost-effective and operationally simple.
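That binding list maps almost one-to-one onto a Worker configuration. A hedged sketch of what the wrangler.toml might look like — binding and resource names are invented, and ids are elided:

```toml
name = "digg-x"
main = "src/index.ts"
compatibility_date = "2026-01-01"

# Binding names below are illustrative, not from the Digg X repo.
[[d1_databases]]
binding = "DB"
database_name = "digg-x"
database_id = "<id>"

[[kv_namespaces]]
binding = "CACHE"
id = "<id>"

[[r2_buckets]]
binding = "CONTENT"
bucket_name = "digg-x-content"

[[queues.producers]]
binding = "ENRICH_QUEUE"
queue = "enrichment"

[[queues.consumers]]
queue = "enrichment"
```

Having the queue producer and consumer in the same Worker keeps enrichment async without a separate service to deploy.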
## Lessons learned

## 1) "Capture quality" matters more than model quality

If extraction is noisy, downstream AI outputs are noisy. The biggest wins came from improving ingestion and normalization.

## 2) Build graceful degradation everywhere

Social platforms change often. Fallbacks (JSON -> DOM, API -> UI) are not optional in extension products.

## 3) Keep the first action incredibly fast

If saving content feels slow, users stop doing it. One-click capture with immediate feedback was critical.

## 4) Don't over-centralize too early

Keeping extension, API, and web responsibilities separate made iteration faster and safer.

## What's next

I'm currently focused on:

- Better semantic retrieval across saved items
- Stronger creator-level insight scoring
- More automation around weekly "research brief" generation

If you're building dev tools, research tools, or browser extensions, I'd love to compare notes. Try Digg X:

- Chrome extension
- Web app