AI‑Assisted Writing as Search (Not Draft Generation)

Source: Dev.to

I have a steady stream of ideas I genuinely want to explore. And yet most of them die in the same place: a few bullet points in a notebook, or an outline in a repo. In my case it literally looks like files named things like:

- projects/personal/ideas/2025-12-10-blog-post-second-brain-experience.md
- projects/personal/ideas/2025-12-13-ai-writing-process.md

They weren't nothing. They were real attempts. But they rarely turned into something I could publish. Not because I had nothing to say, but because writing (for me) had become a fragile, synchronous activity:

- I need long, uninterrupted time to polish.
- I need peers to push back in real time.
- I need enough confidence in English (and honestly, even in French) to feel the result was "worth reading."

Those conditions don't reliably exist in my life. I'm a founder with a family. Time comes in fragments. I also don't live in a dense tech ecosystem where you get daily, high‑signal pushback by osmosis. So ideas would keep looping in my head… and I'd keep not shipping.

This post is the workflow I built to fix that. It's opinionated, but not performatively confident. And it has a simple thesis:

One useful way to think about writing is as a search problem. AI is useful when it helps you explore the search space (multiple framings, objections, structures) before you commit.

Promise (under my constraints): when time is fragmented and I don't have editorial peers on tap, this workflow reliably turns "looping idea noise" into a draft I'd actually be willing to share. It does this by expanding options first, then forcing precision before polishing.

Who this is for: people with ideas, limited uninterrupted time, and limited high‑signal pushback. Who this is not for:

- If you already have strong editorial peers and deep uninterrupted time.
- If your main goal is "prettier prose" or a polished AI voice.
- If your goal is SEO/marketing outcomes.

## A quick epistemic note

I try to label important statements as experience, opinion, assumption, or verified. In this post, most claims are experience or opinion. When I say "this works," I mean "this works for me under my constraints," not "this is a universal method." (Full taxonomy in Appendix A.)

## The real failure mode: committing too early

If you're busy and you have ideas, the default "one‑draft" AI workflow is tempting:

- Prompt a model.
- Get a passable draft.
- Edit a bit.
- Publish (or don't).

My opinion: this fails in a subtle way. It collapses the space too early. If you accept the first coherent framing you see, you miss the alternative theses you didn't think to ask for. You also miss the objections you would have discovered in a real debate. The result may be fluent, but it's often shallow or generic.

## Why "search," specifically?

When I say "writing as search," I mean:

- There are many plausible ways to frame an idea.
- Your first framing is rarely the best one.
- The work is not producing text; it's choosing what you actually believe.

Other metaphors are available: writing as sculpting (remove excess), writing as iteration (revise until good), writing as dialogue (respond to an imagined reader). I still use "search" because it emphasizes a trade‑off that matters under my constraints: backtracking is cheap before commitment. If I haven't spent three hours polishing Draft A, it's easier to abandon it when Draft B reveals a better thesis.

Caveat: this isn't "free." It shifts the cost from writing to reading (and attention). You pay a reading tax to avoid paying a polishing‑the‑wrong‑draft tax.

That's the workflow: separate two modes.

- Exploration: expand the space of possible essays.
- Commitment: pick a framing and make it honest.

## The 5‑step loop (plus an optional 4.5)

Here's the pipeline I run (the folder structure in this repo mirrors it; a minimal sketch follows below):

1. Raw dump (fuel)
2. Perspective expansion (parallel drafts)
3. Synthesis (selection + compression)
4. Human clarification (where truth enters)

4.5. "What would X say?" critique (objection generation, treated cautiously)

5. Integration (write the draft)

The steps sound straightforward; the point is where you put the effort. My experience is that Step 4 (human clarification) is the highest‑leverage part. It's where I stop hand‑waving. Steps 2 and 3 are what make Step 4 possible.

Pipeline in one line: Raw dump → Parallel drafts → Synthesis → Human Q&A → Draft
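The post's repo and CLI aren't shown, so the following is only my guess at a minimal equivalent layout, not the author's tool. The artifact names (raw.md, synthesis.md, answers.md, draft.md) come from Appendix C, and to-verify.md reflects the "things to verify" guardrail discussed later:

```python
# Minimal sketch of the loop as file artifacts (my illustration; the
# post's actual repo layout is not shown, so this is an assumption).
from pathlib import Path

def init_post(workdir: Path) -> None:
    """Create the per-idea artifacts for one pass of the 5-step loop."""
    workdir.mkdir(parents=True, exist_ok=True)
    (workdir / "drafts").mkdir(exist_ok=True)  # Step 2: parallel drafts land here
    for name in (
        "raw.md",        # Step 1: messy self-interview / voice-note transcript
        "synthesis.md",  # Step 3: strongest claims, tensions, needs-evidence list
        "answers.md",    # Step 4: your interview answers
        "draft.md",      # Step 5: the integrated draft
        "to-verify.md",  # guardrail: first-class "things to verify" list
    ):
        (workdir / name).touch()
```

Saving each step as its own file is what makes the "artifact trail" later in the post cheap to produce.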
## Step 1: Raw dump (give the system real fuel)

A raw dump is not an outline. It's not a prompt. It's closer to a messy interview with yourself. Experience: I often start with a voice note (because typing feels like "writing," and writing triggers perfectionism).

What goes into the raw dump:

- What happened (the event) and why it matters to you.
- What you currently believe.
- What you're unsure about.
- What you're optimizing for (clarity? novelty? persuasion?).
- Constraints (time, audience, sensitivity).

Example from this post (grounded): my "raw dump" literally began as a process spec:

- "A technical blog post about a systematic approach to writing better technical blog posts…"
- "Step 1: Raw Information Dump"
- "Step 2: Multi‑Model Perspective Expansion"

That's a common trap: the "workflow post about workflows." It is purely procedural. If your raw dump is thin, everything downstream becomes generic.

## Step 2: Perspective expansion (generate full drafts, not bullets)

This is the step I used to avoid. I would ask one model for an outline, accept the first structure, then start writing. The result was always thin. It was coherent enough to publish, but it missed the alternative framings I didn't know to ask for.

Now I generate parallel drafts: multiple full essays from different models (or different prompts/lenses). Because a complete draft forces:

- definitions
- transitions
- a conclusion
- an implicit set of assumptions

A bullet list can hide all of that.

Experience: I typically run 3–4 models. Not because it's magic, but because it's where I personally hit diminishing returns: fewer often converge too quickly; more rarely adds genuinely new structure for the extra reading time.

What I look for when comparing drafts (practical checklist):

- Which draft makes the strongest claim (and what does it assume)?
- Which draft surfaces the best objections?
- Which draft has the cleanest structure (even if the content is wrong)?
- Which draft feels most "alive" (specific constraints, real stakes)?

Important: I don't treat these drafts as "the answer." I treat them as:

- alternative framings I didn't think of
- objections I didn't anticipate
- better structure than my default
- phrasing I might reuse only if it matches what I mean

## If you only have one model

If you only have access to one model, you can still do perspective expansion by forcing lenses across three passes (sketched in code below):

- Skeptic: strongest objections + missing caveats
- Teacher: simplest explanation + concrete examples
- Editor: structure + cuts + "what should be removed?"
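Here's what the one-model variant could look like in code. This is my sketch, not the author's tooling: `call_model` is a deliberately abstract placeholder (no real provider API is assumed), and the lens wording paraphrases the bullets above:

```python
# One-model perspective expansion: same raw dump, three forced lenses.
# My illustration; `call_model` must be wired to whichever provider you
# actually use, and the lens phrasings are paraphrases, not quotes.
from pathlib import Path

def call_model(prompt: str) -> str:
    """Placeholder for your LLM client; kept abstract on purpose."""
    raise NotImplementedError("plug in your provider's API call here")

LENSES = {
    "skeptic": "Raise the strongest objections and the missing caveats.",
    "teacher": "Give the simplest explanation, with concrete examples.",
    "editor":  "Critique structure, propose cuts, say what should be removed.",
}

def expand(workdir: Path) -> None:
    """Write one full draft per lens into drafts/."""
    raw = (workdir / "raw.md").read_text()
    (workdir / "drafts").mkdir(exist_ok=True)
    for name, lens in LENSES.items():
        draft = call_model(
            "Write a full essay draft (not bullets) from this raw dump.\n"
            f"Lens: {lens}\n\n{raw}"
        )
        (workdir / "drafts" / f"{name}.md").write_text(draft)
```

With 3–4 different models you would fan the same raw dump out to each client instead of looping over lenses; the shape is the same.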
## Step 3: Synthesis (curation, not averaging)

After the parallel drafts, I do a collapse step. Most people hear "synthesis" and imagine "merge paragraphs." A safer claim (and my real experience): the temptation is to average everything into one polite post. That usually produces blandness.

My opinion: synthesis should be opinionated. It's selection + compression:

- Extract the strongest claims.
- Surface disagreements.
- Identify what needs evidence.
- Propose a narrative shape that could actually carry the post.

Example from this post (grounded): my synthesis flagged the core risk:

- "The missing ingredient: a concrete event… Without a real event/case study, it reads as generic advice."

That forced a decision: the post couldn't just be "here is a pipeline." It needed to be grounded in the repeated event of ideas dying as outlines under my constraints.

## Step 4: Human clarification (answer questions like an interview)

This is the step that turns "sounds right" into "is right." After synthesis, I have the model ask me targeted questions. Not "what else should I add?" but questions that force precision:

- What was the triggering event?
- What exactly failed in the old process?
- What are you not claiming?
- What would falsify this?
- What are the hallucination risks?

Then I answer like I'm being interviewed. Experience: this feels like an async podcast episode. It gives me the pushback I don't get locally. (A sketch of this step follows below.)

## What changed for this post (a concrete transformation)

Here's a literal before → question → after chain from this post.

- Before (raw/spec voice): "A technical blog post about a systematic approach to writing better technical blog posts using AI models…"
- Clarification question: "What was the triggering event?"
- After (draft voice): "I have a steady stream of ideas… And yet most of them die… I'm a founder with a family. Time comes in fragments… I don't live in a dense tech ecosystem…"
- What changed: the post stops being a workflow diagram and becomes an explanation of why I need this workflow: it substitutes for missing pushback and makes exploration possible in fragmented time.

This is why I call Step 4 "where truth enters." The model can generate structures and objections, but it can't supply my constraints. If I don't answer precisely, the post becomes confidently generic.
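A minimal sketch of the clarification step, under the same assumptions as before: `call_model` is the abstract placeholder from the Step 2 sketch, and the question prompt is lifted from Appendix C:

```python
# Step 4 as code: generate targeted questions from the synthesis, then
# record the human's answers. My own sketch; the post does not show its
# tooling. Reuses the `call_model` placeholder from the Step 2 sketch.
from pathlib import Path

def clarify(workdir: Path) -> None:
    """Interview yourself against the synthesis; save answers.md."""
    synthesis = (workdir / "synthesis.md").read_text()
    questions = call_model(
        "Ask me at least 15 targeted questions. Prefer questions that "
        "turn vague sentences into concrete, falsifiable statements.\n\n"
        + synthesis
    )
    answers = []
    for q in questions.splitlines():
        if q.strip():
            # Answer like an interview; voice-to-text works well here.
            answers.append(f"Q: {q}\nA: {input(q + ' > ')}")
    (workdir / "answers.md").write_text("\n\n".join(answers))
```

The point of keeping this step interactive is exactly the post's thesis: the model supplies the questions, but only you can supply the constraints.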
## Step 5: Integration (write the draft you actually want to read)

After Step 4, you have something most writing processes don't reliably produce: a clear set of constraints and editorial decisions. Integration is where I:

- choose one framing (and discard others)
- insert real examples where they do argumentative work
- add caveats and "unknowns" instead of smoothing them away

Experience: I stop when I read the draft and genuinely enjoy it. What does "enjoy it" mean concretely (for me)? Three signals:

- I can read the draft without wincing at any sentence.
- There's at least one idea that surprised me. It's something I didn't know I thought until I wrote it.
- I'd send it to someone whose judgment I respect without prefacing it with "sorry this is rough."

If I don't hit that bar after one loop, I often choose not to publish rather than turning it into a multi‑week project.

Caveat (non‑negotiable): the evidence for this workflow is entirely self‑reported. I haven't run controlled comparisons, I haven't measured reader outcomes, and I don't have reliable external feedback. The claim is "this feels better to me under my constraints," not "this is objectively better."

## What this is optimizing for (and what it isn't)

Let me make the trade‑offs explicit.

## Optimizing for

- Novelty (opinion): compressing an idea until there's at least a small "new" thing inside.
- Conversation (experience): getting pushback and alternate framings without needing synchronous peers.
- Momentum (experience/opinion): turning "idea noise" into "idea clarity" in short, fragmented sessions.
- Cost/opportunity efficiency: the whole loop, with 3 revisions, using frontier models cost me about US$3 and an hour focused on the idea (the CLI and workflows were already built beforehand). I've never written anything of reasonable quality that fast.

## Not optimizing for

- Perfect prose (opinion): the writing may still feel rough or "AI‑ish."
- Automatic truth (non‑negotiable): this does not replace research, benchmarking, or fact‑checking.
- Guaranteed speed/quality (non‑negotiable): sometimes exploration expands the problem space and you still decide not to publish.
- Marketing outcomes (non‑negotiable): this isn't about attention or becoming famous.

## Guardrails and failure modes

This workflow can help you think. It can also help you be confidently wrong.

## Minimum safety protocol (use even for casual posts)

- Label important claims (experience/opinion/assumption/verified).
- Keep a "things to verify" list as a first‑class artifact.
- No attribution ("X said…") without a source.
- For technical claims: do separate validation work (experiments/citations/peer review).

## Failure mode 1: hallucinated authority

The biggest risk is that fluent text smuggles in fake certainty:

- performance claims ("X is fastest") without benchmarks
- invented timelines
- misattributed quotes without sources

Mitigations I use (experience/opinion):

- Keep epistemic labels.
- Maintain a "things to verify" list.
- Prefer narrow, attributable statements.

## Failure mode 2: genericness via weak grounding

If you skip the event and constraints, everything becomes "AI transforms writing." My opinion: a process‑only post is almost always less compelling than a story‑backed one.

Mitigation: include an artifact trail (even a tiny one):

- raw dump excerpt
- contrasting draft theses
- synthesis bullets (tensions, missing evidence)
- what changed after clarification

## Failure mode 3: process tax

This pipeline adds steps. And yes: the timeboxed loop below is often ~60–80 minutes. My experience/opinion is that it's still worth it when the alternative is spending three scattered evenings polishing the wrong idea, or never shipping at all. The efficiency gain is often in discarding bad paths early, not in "typing faster."

Mitigations (experience):

- Timebox sessions to your real life (kid asleep, late night).
- Stop after one loop if it's "good enough."
- Save artifacts so future posts get easier.

## Failure mode 4: deep technical claims without verification

For highly technical writing (benchmarks, correctness proofs, security claims), this workflow can produce plausible nonsense. Opinion: treat it as idea refinement, not validation. If you need rigor, add separate work:

- experiments
- reproduction steps
- peer review

## A minimal "do this tonight" recipe

If you want to try this tomorrow, don't copy my whole setup. Do the smallest viable loop.

## Timeboxed loop

- 10 minutes: voice‑note raw dump
  - what you believe
  - what you're unsure about
  - what you're optimizing for
- 10–20 minutes: generate 3 full drafts (different models, or the same model with different forced lenses)
- 10 minutes: synthesize
  - best claims
  - disagreements
  - "needs evidence" list
- 30–45 minutes: answer clarification questions like an interview (voice works well)
- Stop: when you hit your "enjoy it" bar; publish or save as an internal memo

Even if you don't publish, you've converted a looping idea into a clearer artifact.
## Closing: ship artifacts, not perfection

I built this workflow because I wanted a repeatable way to turn "ideas I can't stop thinking about" into "ideas I can actually examine." I needed it under real constraints: language, isolation, and fragmented time.

My strongest claim here is simple (opinion): use AI to explore the space before you commit.

- Use AI for expansion (parallel drafts), not for authority.
- Use synthesis to curate, not average.
- Use human clarification to keep yourself honest.

If you're someone with too many ideas and too little uninterrupted time, try one loop. You don't have to become a better prose stylist overnight. You just have to build a process that makes thinking possible again.

## Appendix A: Epistemic status taxonomy

I try to label important statements with one of these tags:

- Experience: something I personally observed/did.
- Opinion: a value judgment or preference.
- Assumption: plausible but not verified.
- Verified: backed by a link, benchmark, or citation.

## Appendix B: Optional Step 4.5 ("What would X say?" creator critique, experimental)

I've experimented with an extra loop: simulate critique using a corpus of a creator's past takes. I'm not sure I love it yet, so I treat it as optional, and as objection generation, not truth (a minimal sketch follows below).

Non‑negotiable safety rules (opinion):

- No attribution without a link/source.
- Explicitly call out context mismatch and possible staleness.
- Human must approve/reject anything incorporated.

Also: if this ever became more than a private tool, I'd want creator buy‑in; otherwise it can drift into "abuse" territory.
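For the curious, here is what Step 4.5 could look like with the safety rules baked into the prompt. Everything in this sketch is assumption: the corpus file, the prompt wording, and the `call_model` placeholder from the Step 2 sketch:

```python
# Step 4.5 sketch: objection generation from a corpus of a creator's
# past public takes. My illustration only; the post shows no tooling.
# The output is simulated objection material, never the creator's view.
from pathlib import Path

def what_would_x_say(workdir: Path, corpus_path: Path) -> str:
    """Return simulated objections grounded only in the given corpus."""
    draft = (workdir / "draft.md").read_text()
    corpus = corpus_path.read_text()  # e.g. notes on X's past public takes
    # Safety rules from Appendix B, enforced in the prompt; a human must
    # still approve or reject anything incorporated from the output.
    return call_model(
        "Using only the positions in this corpus, raise objections this "
        "creator might make to the draft. Rules: no invented quotes or "
        "attributions; flag context mismatch and possible staleness; "
        "label everything as a simulated objection, not the creator's "
        f"actual view.\n\nCorpus:\n{corpus}\n\nDraft:\n{draft}"
    )
```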
## Appendix C: Prompt shapes (rough, but runnable)

Use whatever tool you like. The important part is the shape of the prompts. (A runnable sketch follows at the end.)

Perspective expansion (run 3 times):

Read raw.md. Write a full essay draft.

- Pick a thesis and make it explicit in the first 10 lines.
- Define 3 key terms you're using.
- Include at least 3 objections (steelman them).
- Flag any claim that needs evidence.
- If you're about to generalize ("most people…"), rewrite as the author's experience or delete.
- Keep epistemic honesty: don't invent facts.

Synthesis:

Given the 3 drafts, produce:

- the best 3 thesis options
- the strongest claims (with which draft they came from)
- disagreements/tensions
- "needs evidence" list
- 10 clarification questions for the human
- one recommended narrative spine (and why)

Clarification questions:

Ask me at least 15 targeted questions. Prefer questions that turn vague sentences into concrete, falsifiable statements.

Integration:

Write draft.md using raw.md, synthesis.md, and answers.md.

- Choose one framing.
- Add explicit caveats.
- Keep what is true; remove what is only plausible.
- Do not add quotes or attributions without sources.
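And to honor "runnable": one way to wire the shapes together end to end. This is my sketch, not the author's CLI: the prompt constants condense the shapes above, and `call_model` is still the abstract placeholder from the Step 2 sketch, so you must supply a real model call:

```python
# End-to-end driver for the prompt shapes above (my sketch; the prompt
# constants paraphrase the shapes, and `call_model` stays a placeholder).
from pathlib import Path

EXPANSION = (
    "Read the raw dump below. Write a full essay draft. Make the thesis "
    "explicit in the first 10 lines, define 3 key terms, steelman at "
    "least 3 objections, flag claims needing evidence, rewrite any "
    "generalization as the author's experience, and don't invent facts.\n\n"
)
SYNTHESIS = (
    "Given the drafts below, produce: the best 3 thesis options, the "
    "strongest claims (and which draft they came from), disagreements/"
    "tensions, a 'needs evidence' list, 10 clarification questions for "
    "the human, and one recommended narrative spine (and why).\n\n"
)
INTEGRATION = (
    "Write the final draft from the raw dump, synthesis, and answers "
    "below. Choose one framing, add explicit caveats, keep what is true, "
    "remove what is only plausible, and add no unsourced attributions.\n\n"
)

def run(workdir: Path) -> None:
    """One full loop: expand, synthesize, pause for the human, integrate."""
    raw = (workdir / "raw.md").read_text()
    drafts = [call_model(EXPANSION + raw) for _ in range(3)]
    synthesis = call_model(SYNTHESIS + "\n\n---\n\n".join(drafts))
    (workdir / "synthesis.md").write_text(synthesis)
    # Step 4 stays manual on purpose: answer the questions yourself.
    input("Answer the clarification questions in answers.md, then press Enter…")
    answers = (workdir / "answers.md").read_text()
    draft = call_model(f"{INTEGRATION}{raw}\n\n{synthesis}\n\n{answers}")
    (workdir / "draft.md").write_text(draft)
```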