Tools: Your AI Just Said "I Can't Do That, Dave." (2026)
How skill files turn a wall-hitting assistant into a lateral thinker, and why most setups are wiring the wrong thing.
Part 1: The Wall

It started with an email in the morning, before my chai had kicked in. Not the fun kind. A Google Search Console notification, the kind that lands in your inbox with the quiet menace of a parking ticket you didn't know you'd earned. Subject line: "New Coverage issue detected." Six pages. Blocked. 403 errors. Googlebot — the one crawler you actually want on your site — had been turned away at the door. Three times.

You've submitted the validation request twice already. So annoying. Both times Google came back, tried to crawl, got a 403, and left. The third submission is sitting there, waiting. Your patience is doing the same.

So you do what any reasonable person does at this point: you open your AI assistant, paste in a hasty snippet of the issue, and ask it to diagnose the problem. That should fix it.

The assistant looks at the Search Console screenshot. It reasons through the possibilities. It considers nginx configs, server blocks, robots.txt entries, HTTP response codes. It is, by any measure, thinking hard. Then it reports back:

"I'm unable to directly access your Cloudflare dashboard to inspect the firewall rules. You may want to check the Security settings manually."

You stare at that sentence for a moment. You read it again. You feel something between frustration and genuine bewilderment, because you know — you know — that the answer is in Cloudflare. The VPS logs are clean. Nginx is serving 200s to everything that reaches it. The block is happening upstream, at the Cloudflare layer, before requests even touch the server.

And you also know, somewhere in the back of your mind, that there is a Cloudflare API token sitting in your AES-256 credential vault. You stored it there yourself, months ago. The assistant has access to that vault. It has tools to run curl requests from the VPS. It has a Tailscale connection to your dev machine. It has, in short, at least three completely viable paths to the answer.

It found zero of them. It hit a wall and reported the wall.

What it should have said: "I'll check this via the Cloudflare API — I have a token in the vault. Going now."

Four minutes later, it would have found it: security level set to high, browser integrity check switched on. That last one is the culprit — it serves a JavaScript challenge to unrecognised visitors, and Googlebot cannot solve a JavaScript challenge. Every crawl attempt: 403. Three submissions to Search Console. Weeks of indexing delay.

Two API calls to fix. One to set the security level to medium. One to turn off the browser integrity check. Done.
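For the record, those two calls look something like this in curl. The zone ID is a placeholder you'd look up first; security_level and browser_check are the relevant setting names in Cloudflare's v4 API:

```
# 1. Drop the zone's security level from "high" to "medium"
curl -X PATCH \
  "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/settings/security_level" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"value":"medium"}'

# 2. Turn off the Browser Integrity Check (the JavaScript challenge)
curl -X PATCH \
  "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/settings/browser_check" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"value":"off"}'
```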
The fix was trivial. The path to the fix was invisible — not because the tools weren't there, but because nobody had told the assistant to look for them.

This is not a story about a bad AI; AI is great when it works as expected. This is a story about an unconfigured one. And the difference matters enormously, because the tools were there the whole time. The credential was in the vault. The API was documented. The VPS was one SSH call away. The assistant knew all of this, in the same way you know where your keys are even when you're looking for them in the wrong pocket. It just needed to be told to check the other pockets.

That's what a skill file does. And most of them aren't doing it.

Part 2: Why AI and Humans Hit Different Walls

To understand why this happens — and why skill files fix it — you need to understand a fundamental mismatch between how humans and AI systems process problems.

Edward de Bono, the psychologist who coined the term lateral thinking in his 1970 book Lateral Thinking: Creativity Step by Step, identified the core issue decades before large language models existed. His observation was this: "The difficulty of thinking in alternatives is not a lack of intelligence — it is a conditioned habit of following the most obvious path." He was talking about humans. But it describes AI default behaviour almost perfectly.

How humans actually solve problems

When a human engineer hits a wall — say, no direct access to a service — they don't stop. They activate what cognitive psychologists call associative reasoning: a non-linear web of memory, analogy, intuition, and past experience that fires simultaneously, not sequentially.

Daniel Kahneman, in Thinking, Fast and Slow, describes two parallel systems at work: System 1 (fast, instinctive, associative) and System 2 (slow, deliberate, logical). When a human faces a blocked path, System 1 immediately pattern-matches against thousands of similar situations — "this is like the time we couldn't access the AWS console and used the CLI instead" — while System 2 reasons through the alternatives System 1 surfaces.

The result is what we'd call lateral thinking: the engineer doesn't just try the next step in the sequence. They jump domains. They reframe. They ask "what if I approached this from the other side?"

How AI systems actually process problems

AI language models — regardless of how sophisticated they are — are fundamentally sequential processors. Each token is generated by attending to what came before and predicting what comes next. This makes them extraordinarily good at completing patterns, following chains of reasoning, and executing known procedures. It makes them structurally weak at one specific thing: generating alternatives when the primary path fails.

When an LLM hits a wall — no direct tool match, no obvious next step — it doesn't activate a web of analogies and past experience. It completes the pattern in front of it. And the pattern in front of it, when no tool matches a task, is: report that you can't do the task.

Picture the divergence. Human problem-solving radiates outward from the problem in all directions simultaneously — memory, intuition, analogy, emotional resonance, reframing — with cross-links between nodes that generate unexpected solutions. AI default reasoning moves linearly: read prompt → check tools → no match → report failure.

The AI isn't less intelligent. It's differently structured. And that structure has a specific failure mode: it will execute any explicit procedure brilliantly, and stall at any gap in the procedure.

This is precisely why Gary Klein, in Sources of Power: How People Make Decisions, found that expert humans rarely follow decision trees when working under pressure. Instead they use recognition-primed decision making — pattern recognition that triggers the first workable option, then mental simulation to check it, then adaptation. It's messy, non-linear, and extraordinarily effective.

The skill file is how you give an AI the scaffolding for that same behaviour. You can't give it System 1 instincts. But you can give it an explicit checklist that mimics the outputs of lateral thinking — try the vault, try the VPS, try the hop, try the reframe — and that checklist fires where the instincts would have.
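If it helps to see the mechanics, here is a toy sketch in shell: every function name is a hypothetical stand-in for one escalation step, and the short-circuit OR is the "stop at first hit" rule.

```
#!/bin/sh
# Toy model of the chain: each stand-in step either succeeds or fails,
# and || only tries the next step if the previous one failed.
check_skill_file() { false; }                  # nothing documented yet
try_vault()        { false; }                  # no matching key name
curl_from_vps()    { echo "solved via API"; }  # first hit: stop here
tailscale_hop()    { echo "unreached"; }       # never runs

check_skill_file || try_vault || curl_from_vps || tailscale_hop
```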
It's not the same as human reasoning. But at 4:49 PM on a Tuesday, when your homepage is wearing a giant broken SVG logo because of a CSS config issue, it's close enough.

Part 2b: What a Skill File Actually Is

Most developers treat skill files like a README. Drop in some project context, list your tech stack, maybe add a note about preferred formatting. This is approximately as useful as handing a surgeon a Post-it note that says "patient has two arms."

A skill file isn't documentation. It's a cognitive protocol. It's the difference between an assistant that hits a wall and one that walks around it.

Here's what a minimal skill file looks like in the wild:

```
# Project Context
- Stack: Node.js, SQLite, nginx
- VPS: [host stored in vault]
- SSH key: stored in credential vault
```

Useful. Fine. But watch what happens when things go wrong. The assistant needs to check a Cloudflare firewall rule. It doesn't see a Cloudflare tool in its toolkit. It reports back: "I can't access Cloudflare directly."

And technically, it's right. There's no Cloudflare MCP server connected. No dashboard access. No magic portal. But there is a credential vault with a Cloudflare API token. There is a VPS that can make curl requests to the Cloudflare API. There is a Tailscale connection to the dev machine where the CF CLI lives. There are three paths to the destination — and the assistant found zero of them, because nobody told it to look.

This is the core failure mode of AI assistant configuration. We tell the assistant what the project is. We never tell it how to think when things go wrong. Lateral thinking — in the de Bono sense, the deliberate departure from the obvious path — doesn't emerge naturally from language models. It has to be instructed. Explicitly. In the skill file.

And the good news is: it's not complicated.

Part 3: The Configuration That Changes Everything

Here's what we added to the skill file after the incident. Read it like a protocol, not a prompt:

```
# Lateral Thinking — NEVER SAY "I CAN'T"

When hitting a wall, run this chain SILENTLY before responding.
Never announce it — just execute and present options or start
the best path immediately.

Auto-resolution chain (run in order, stop at first hit):
- Skill file — is the answer already documented here?
- cloak_passport — try likely key names: exact service name,
  service-key, service-api, service-token, SERVICE_API_TOKEN
- VPS curl — run the API call from the server itself
- Tailscale hop → dev machine — reach local tools not on VPS
- vektor_recall — search memory for prior solutions
- web_fetch / web_search — find API docs, workarounds
- Reframe — can we replace X? redirect X? override X upstream?

Response format — paths not walls:
❌ "I can't access Cloudflare directly"
✅ "Reaching this via CF API token from vault — going now."

Default: pick the most likely path and START.
Don't ask permission unless genuinely ambiguous.
```

Four things make this work. Not three. Not five. Four.

The chain is ordered. The assistant doesn't randomly try things. It walks a priority queue: local knowledge first, credentials second, infrastructure third, external search fourth, creative reframe last. This matters because it mirrors how a competent engineer actually debugs. You check what you know before you reach for a browser.

It runs silently. The instruction says silently. This is not an accident. An assistant that narrates its own diagnostic process is an assistant burning your attention on process instead of outcome. The chain is invisible machinery. The output is a solution.

It ends with reframe. This is the step most configurations miss entirely. If every tool in the toolkit fails — if the API is down, the credentials are wrong, the VPS is unreachable — the protocol doesn't report failure. It asks a different question: what's the non-obvious path? Can we achieve the same outcome by approaching the problem from the other side? In the Cloudflare case: if the API token had been wrong, the reframe might have been "can we modify the nginx config to bypass the block at the server level?" Different path. Same destination.

The credential map is in the file. Not in your head. In the file. This table is worth more than any amount of system prompt engineering. It converts "I can't find the credentials" into "I found CF_API_TOKEN, calling the API now." The assistant doesn't need to guess. It has a map.
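A sketch of the shape that map can take, using the services from this story. CF_API_TOKEN is the key name the article itself uses; the other rows and key names are illustrative:

```
# Known Credential Map
| Service     | Vault key     | Reach it via          |
|-------------|---------------|-----------------------|
| Cloudflare  | CF_API_TOKEN  | curl from the VPS     |
| VPS         | vps-ssh-key   | SSH from dev machine  |
| Dev machine | tailscale-ip  | Tailscale hop         |
```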
The result of adding these four things to our skill file was immediate and measurable. The next time we hit a blocked page — Google Search Console reporting 403 errors across six core pages, Googlebot blocked for the third time — the diagnostic went like this:

1. Check VPS nginx logs → Googlebot getting 200s, not 403s
2. Therefore the block is happening at the Cloudflare level
3. Retrieve CF_API_TOKEN from the credential vault
4. Query the Cloudflare API from the VPS via curl
5. Find: security level set to high, browser integrity check on
6. Patch both settings via the API
7. Verify with live curl tests

No walls. No "I can't access Cloudflare." Just a chain of steps that ended with the problem solved.

The browser integrity check, for the record, is a JavaScript challenge that Cloudflare serves to unrecognised visitors. Googlebot — and every other legitimate crawler — cannot execute JavaScript challenges. With it turned on, every Googlebot visit returned a 403. With it off and the security level at medium, crawlers pass through and the bad actors still hit your explicit firewall rules.
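The live verification can be as simple as replaying the crawl. The URL here is a stand-in; the user-agent is Googlebot's published string:

```
# Expect a 200 status line now, not a 403
curl -sI -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" \
  "https://example.com/blocked-page" | head -n 1
```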
A two-line API fix. Found in under four minutes. Because the skill file told the assistant to look.

Part 4: Twenty Things Your Skill File Should Know

The Cloudflare example is about tool access. But lateral thinking in a skill file goes deeper than credentials and API chains. Here's the broader list of what belongs in a properly configured skill file — not just for debugging, but for the full surface area of how an AI assistant fails to think.

On tool access: Your assistant needs to know every path into your infrastructure. Not just the obvious one. VPS SSH, yes. But also: API tokens for every service you use, Tailscale IPs for every machine in your network, alternative endpoints when primary ones fail. The credential map isn't optional — it's the difference between a dead end and a detour.

On decisions already made: Half the time an AI assistant suggests the wrong solution, it's because it doesn't know the right one was already tried and rejected. Put your settled decisions in the skill file. "We chose Postgres over MongoDB — final." "REST, not GraphQL — not up for debate." This isn't rigidity. It's preventing the assistant from walking you backward through arguments you already won.

On how you want to be interrupted: The default behaviour of most AI assistants is to ask before acting. This is safe. It's also slow. Your skill file should specify when the assistant should just go: "Pick the most likely path and start. Don't ask permission unless genuinely ambiguous." And equally, when it should stop and check: "If the fix creates technical debt, flag it before executing."

On your stack, your conventions, your vocabulary: Industry terminology, internal project codenames, file naming conventions, branch strategy, error handling patterns. An assistant that doesn't know your project calls things by the wrong names, proposes solutions for a stack you don't use, and asks questions you shouldn't have to answer.

On the session lifecycle: A skill file should include session open and session close protocols. On open: recall the last session's handover note, check system health, surface any pending items. On close: write a consolidated memory note covering what changed, what's pending, and any config modifications. Without this, every session starts blind. With it, every session starts with context.

On what the assistant should never say: "I can't." "I'm unable to." "I don't have access to." These phrases should be absent from a properly configured assistant. Not because the limitations don't exist — they do — but because the response to a limitation is always a path, never a wall.
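Pulled into a skill file, a few of these items can look as plain as this sketch, reusing the article's own examples (section names are illustrative):

```
# Decisions (settled, do not reopen)
- Postgres over MongoDB — final
- REST, not GraphQL — not up for debate

# Interruption rules
- Default: pick the most likely path and start
- If a fix creates technical debt, flag it before executing

# Session lifecycle
- On open: recall last handover note, check health, surface pending items
- On close: note what changed, what's pending, any config modifications
```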
The Skill File Is the Product

Here's the thing nobody tells you about AI-assisted development. The model is a commodity. GPT-4o, Claude Sonnet, Gemini — at the level of general capability, they're roughly interchangeable for most tasks. What's not interchangeable is the configuration layer wrapped around them. The skill file is that configuration layer. And most people treat it like an afterthought.

The developers getting the most out of AI assistants right now aren't the ones with the best prompts. They're the ones who have invested in the infrastructure around the model: credential vaults, session memory, lateral thinking protocols, credential maps, decision logs. The cognitive scaffolding that turns a capable model into a reliable teammate.

The Googlebot 403s got resolved. Not because the model got smarter — because the skill file got better.

If your AI assistant says "I can't" more than once a week, that's not a model problem. That's a configuration problem. And configuration problems have solutions.

Tools That Help

The VEKTOR downloads page has two free resources worth grabbing regardless of whether you use VEKTOR's memory system:

VEKTOR Memory Skill (scroll down the page) — a drop-in SKILL.md for Claude Code, Cowork, Cursor, Cline, and Roo. Includes auto-briefing on session start, smart recall routing, and memory checkpointing. Free, no licence required; drop it in .claude/skills/ and it auto-loads.

Personal Harness Template (scroll down the page) — a pre-wired skill template with session rules, memory namespaces, approval gates, and 20 fill-in slots for your own context.

Both files are designed around the same principle as this article: your assistant should never hit a wall it can't route around. The templates give you the scaffolding. The credential map, the decision log, the lateral thinking chain — you add those once, and they compound across every session you run. Start personalising: copy the ideas above into your own configuration, with the two files as your starting point. And start living in the future.

VEKTOR Memory is a local-first AI agent memory system. Persistent, sovereign, sub-1ms recall. vektormemory.com

Follow @vektormemory on Medium for more on agent architecture, memory systems, and the infrastructure layer nobody talks about.