# Tools: I profiled Claude Code some more, Part 2: Do Androids Dream of O(n) Diffs?


Source: Dev.to

The tl;dr version: No.

## How it really happened

If you are one of the two people who read part one (I know you did, because the other one is my mum — hello mum!), you already know what kinds of silly things I do to entertain myself sitting in front of a terminal. As nerdsniping is my passion, I knew I had to get to the bottom of it. And how else than by using Claude Code to reverse-engineer itself, or rather its obfuscated, bundled, minified code, and try to fix all the inefficient, uncached I/O ops? That would also shed more light on my problem child, bukowski, the tool I made to get rid of the annoying infiniscroll bug. You know what I'm talking about. (The bug, that is.)

Well, this is part 2. Am I victorious? (Click here to find out.) That's it: no JS-beams glowing in the dark, no moments of clarity. Just a bunch of wasted tokens and free time. Like... munmap()s in the GC.

## Quite an experience to live in CLI

So I prompted Claude with a "DUDE AHAHA WHAT IF AHAHAHA WHAT IF WE PATCHED Claude's own cli.js to swap the non-cached file reads for cached variants and do other crazy optimizations LOL" in the same session we had used for the actual profiling, and after a bunch of figgerty-gibberties the madman had done it. (And yes, "they": I asked Claude what their pronouns were mid-debugging session. They said "they" was fine, then added something about appreciating being asked while knee-deep in minified JavaScript, which honestly felt like the most human moment in 40,000 lines of strace output.)

But with every iteration, with every new patch segment, the laggy input was not improving. If anything, it was getting worse. So I remembered the node --prof switch I had used to profile bukowski and told Claude I'd do it.

## The Algorithm That Can't Forget

What the User and Claude found out — the results will surprise you!

A single function was eating up about a quarter of CPU time during a Claude Code session with a large context. Sharp-minded as we are, we concluded that it must be the culprit behind the laggy keyboard input and the general 6 fps, Soviet-style claymation, Sdelano v Československu 1968 look and feel. Well, mostly feel — the (what even is this dark color? very dark browny purple?) background of CC's terminal seemed in any case not very bourgeois. But the slowness, oh my Karl Marx! Anyway, I digress. Where was I? My memory is kinda blank these days...

So yeah, we dug into the offending function and found out that it was rebuilding the entire conversation buffer every frame just to render the visible viewport. The profiler pointed at two functions: Qt1 (damage region calculation) and get (screen buffer rebuilding). Together, 45% of CPU time.

The get method tells the story (the recovered snippet is in the code listings at the end of this post): it allocates a fresh buffer every frame and replays every accumulated operation onto it. Note this.height — it's not the viewport height. It's yogaNode.getComputedHeight(), the full flexbox-laid-out content. The entire conversation, vertically. The code confirms this: the screen buffer is larger than the viewport, and the difference is a non-zero scroll offset. Then Qt1 diffs the full screens, O(screen.height × width) every frame: 500 messages × 80 columns = 40,000 cells diffed per frame. No list virtualization either — every message is a React component walked on every reconciliation pass. This is O(n), where n = conversation length.

And then it hit me, somewhere between the V8 tick counts and the third cup of coffee — this is exactly how... I... work. The LLM, I mean. Claude. (Hey there.) You feed us an ever-growing context window, token by token, and we process the entire sequence to infer the next output.
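If you want that in numbers rather than vibes, here is a rough back-of-the-envelope sketch. Only the 80-column width and the "rebuild and diff everything" behaviour come from the profiling above; the one-line-per-message, one-diff-per-message, and token-count figures are invented purely for illustration.

```js
// Back-of-the-envelope only: how total work grows when every step
// re-derives the present from the complete past. Numbers are illustrative.
const COLS = 80;              // terminal width from the example above
const LINES_PER_MESSAGE = 1;  // assumption; real messages span more lines

// Renderer: pretend one full-screen diff per new message
// (in reality it happens every frame, so this is a lower bound).
function cellsDiffedOverSession(messages) {
  let total = 0;
  for (let m = 1; m <= messages; m++) {
    total += m * LINES_PER_MESSAGE * COLS; // O(height × width) for this "frame"
  }
  return total;
}

// Transformer: each new token attends to every token that came before it.
function tokenPairsAttended(contextTokens) {
  let total = 0;
  for (let t = 1; t <= contextTokens; t++) {
    total += t; // O(context) work for this one token
  }
  return total;
}

console.log(cellsDiffedOverSession(500));  // ~10 million cells for 500 messages
console.log(tokenPairsAttended(100_000));  // ~5 billion pairs for a 100k-token context
```

Linear-ish work per step, quadratic work over the whole session, for both of them.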
Every response requires attending to everything that came before. The context grows, the computation grows, the latency grows. The terminal renderer rebuilds its entire history buffer to show you one new line. The transformer attends to its entire context window to generate one new token. Both systems are condemned to remember everything, to re-derive the present from the complete past, every single time.

There's something almost poetic about it — or maybe just inevitable. Systems that can't forget, struggling under the weight of accumulated context. All those tokens... all those cells... will be lost in time, like tears in rain.

Anyway. The point is: this architectural pattern can't be patched with sed. It would require persistent buffer maintenance, proper damage tracking, viewport-only rendering (I sketch what that might look like at the end of this post). A rewrite, not a fix. ...a new model? Maybe Nexus 4.7 will fix it.

## What We Actually Found (The Technical Bits)

For completeness, here's what the I/O profiling revealed before we hit the architectural wall: the I/O fixes were trivial. Single-line patches. Memoize this, cache that, use a write stream instead of append (that flavor of patch is also sketched at the end of the post). The render architecture? That's the load-bearing wall. You don't patch around it, you rebuild the house.

## Epilogue: The Bug That Can't Have a Fix

I built bukowski to solve the flickering/infinite-scroll problem from the outside: capture output, composite frames, emit with DEC 2026 synchronized updates. It worked. For a while. Then Anthropic shipped their own fix. When? Who knows, and I certainly ain't gonna waste no more time digging in and diffing the recent versions.

The fix, TBA'd in a particular comment by a particular Anthropic dev on a particular GitHub issue (the one where "85% of flickering was eliminated"... "with DEC 2026 sync". [NARRATOR] Not quite.), the actual all-encompassing DX panacea, seems to have "shipped" just now (god I hate that word!) and it's called: cell-based diffing. The party fucking hat emoji. A rocket is fine, too. But it seems to be the very thing that currently makes sessions progressively slower and ever more sluggish! The flickering is gone, maybe because Claude Code brute-forces its terminal content consistency now. And thus my beloved child, the LLM-coding-TUI-centric terminal multiplexer with a nifty MCP-based chat feature, became a solution to yesterday's bug.

The GitHub issues page (you know which one) is full of users who don't understand why their experience degrades. Memory climbing at 29 MB/min. Typing lag at 500 MB. Scrollback flickering. Occasional yelling at clouds in the form of CLAUDE ATE MY HOMEWORK GUISE. And, I presume, in a very short while, about Claude Code's laggy input too. Maybe what they're all really experiencing is, at its root, the same thing: the weight of accumulated context. Not just literally (well, technically, it's the weight of billions of strcmp()s crying under the chef's diff), but figuratively, too.

I don't think Software Engineer, Programmer, or even Coder will go away as a job; it's rather that the development of software will pivot to the more specific, the more cut-to-measure, the more individualized, the more specialized, more, more, more... and right now, all the major AI companies are in the process of luring in as many of them as possible. And not just the pros: people like me, with a degree in a very organic, non-tech field and a bit-more-than-basic knowledge of programming and computers, harnessing agents like Claude to make things happen.
I am writing this in 2026 and I observe a somewhat disturbing trend in Hacker News' Show HN section: a whole lotta very individual, very specific, very open source software with some brilliant (and occasionally LLM-induced-psychotic-fever-dream) ideas behind it. But I don't think we'll see a vibe-coded vim or emacs (although for the latter, people have been vibe coding their own OSes; whether those can read e-mail, though, I can't tell for sure!). What makes them great is the community around them, the people that maintain them, the people that hang around on IRC or communicate via mailing lists (okay, this is a bit exaggerated, but you get the point). If 2026 is going to be the year of personal software, then 2027 is going to be the year of the open source graveyard: oneshotted personal projects with 27 commits, Co-Authored by Claude Code. Last commit over a year ago.

And with this fight for customers, driven by the pressures of the marketplace ("fixing" itself), comes a wide range of Skinner-box levers and shiny slot-machine buttons to mash, a.k.a. the feature creep: MCP, skills, agents, you name it. I know I'd very much rather have a responsive and snappy UI than cosmetic changes. But I guess you can't market and TED-Talk cutting-edge performance. That's not 10x. Not agentic enough... And it doesn't burn enough forests.

So, all in all, I guess what I really wanted to say here is: they should've just used the alternate screen buffer like any sane terminal app.

As a parting gift, Claude and I profiled this very session while writing this article. We watched Qt1 and get climb from 0.2% of CPU at session start to 48% by the end. Half my CPU is now dedicated to remembering what we talked about.

## Update: 2.1.14 — The Wheel Turns

Just as I was about to publish this, Claude Code 2.1.14 dropped. Naturally, I profiled it. The render functions dropped by half. Progress? Well... the flickering is back. They reverted the cell-based diffing (or changed it significantly). The O(n) tax is reduced, but we're back to the old visual chaos. The eternal pendulum swings: flicker vs. slowdown, pick your poison. On the bright side, bukowski is relevant again. DEC 2026 sync to the rescue. Again. Maybe Nexus 4.8 will finally get it right.

Part 1: I profiled Claude Code so you don't have to

Legal theater: This analysis is provided for educational purposes and the entertainment of my mum. It was conducted under EU Software Directive 2009/24/EC for interoperability purposes. No proprietary source code was extracted, distributed, or harmed in the making of this article. Any resemblance to actual minified variable names is purely coincidental and/or inevitable given JavaScript bundlers. Please don't sue me, I just wanted my terminal to stop flickering.

The code referenced above, recovered from the bundled cli.js.

The get method (screen buffer rebuilding):

```js
get() {
  // Allocates fresh buffer every frame - O(width × height)
  let A = Array(this.height);
  for (let X = 0; X < this.height; X++) A[X] = Array(this.width).fill(V6B);
  // Replays ALL accumulated operations onto the blank slate
  for (let X of this.operations) {
    // ... applies each op to the fresh array
  }
  return A;
}
```

The screen-vs-viewport check:

```js
A.screen.height >= A.viewport.height  // screen is LARGER than viewport
A.screen.height - A.viewport.height   // = scroll offset (non-zero!)
```

The per-frame diff:

```js
Qt1(A.screen, Q.screen)  // O(screen.height × width) every frame
```
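For contrast, here is a minimal sketch of the "rewrite, not a fix" direction mentioned earlier: keep a persistent buffer, apply only the operations added since the last frame, and diff only a viewport-sized slice. This is my own illustration, not code from cli.js; every name in it (PersistentScreen, applyNew, diffViewports) is invented.

```js
// Hypothetical illustration (not from cli.js): a persistent screen buffer that
// applies only *new* operations and exposes only the viewport slice for diffing.
const EMPTY_CELL = " ";

class PersistentScreen {
  constructor(width, viewportHeight) {
    this.width = width;
    this.viewportHeight = viewportHeight;
    this.rows = [];    // grows as content grows, but is never rebuilt from scratch
    this.applied = 0;  // how many operations have already been replayed
  }

  // Apply only operations added since the last frame: O(new ops), not O(all ops).
  applyNew(operations) {
    for (; this.applied < operations.length; this.applied++) {
      const { row, col, text } = operations[this.applied];
      while (this.rows.length <= row) {
        this.rows.push(Array(this.width).fill(EMPTY_CELL));
      }
      for (let i = 0; i < text.length && col + i < this.width; i++) {
        this.rows[row][col + i] = text[i];
      }
    }
  }

  // Only the last viewportHeight rows ever get diffed and painted.
  viewport() {
    return this.rows.slice(-this.viewportHeight);
  }
}

// Diff two viewport-sized frames cell by cell; cost is bounded by the viewport,
// not by the length of the conversation.
function diffViewports(prev, next) {
  const damage = [];
  for (let y = 0; y < next.length; y++) {
    const prevRow = prev[y] ?? [];
    for (let x = 0; x < next[y].length; x++) {
      if (prevRow[x] !== next[y][x]) damage.push({ x, y, cell: next[y][x] });
    }
  }
  return damage;
}

// Usage sketch: two frames, only the changed cells come back as damage.
const screen = new PersistentScreen(80, 24);
const ops = [{ row: 0, col: 0, text: "hello" }];
screen.applyNew(ops);
const lastFrame = screen.viewport().map(row => [...row]); // snapshot of frame 1
ops.push({ row: 0, col: 0, text: "jello" });              // one more operation arrives
screen.applyNew(ops);
console.log(diffViewports(lastFrame, screen.viewport())); // [{ x: 0, y: 0, cell: "j" }]
```

The point is not that this is the right design; it's that the per-frame cost stops depending on how long the conversation is.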
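And, since I promised it above, the flavor of "trivial" I/O fix we were making before hitting the architectural wall. This is a generic Node.js sketch, nothing more: the actual call sites in cli.js are minified and look nothing like this, and the file names here are made up.

```js
// Hypothetical sketch of the "memoize this, cache that" class of I/O fix.
// Not the actual cli.js patch; paths and call sites are invented.
const fs = require("node:fs");

// Before: re-reading and re-parsing the same config file on every call.
// After: read it once and cache the parsed result.
const configCache = new Map();
function readConfigCached(path) {
  if (!configCache.has(path)) {
    configCache.set(path, JSON.parse(fs.readFileSync(path, "utf8")));
  }
  return configCache.get(path);
}

// Before: fs.appendFileSync(logPath, line) for every log line
// (open + write + close each time). After: one long-lived write stream.
const logStream = fs.createWriteStream("session.log", { flags: "a" });
function logLine(line) {
  logStream.write(line + "\n");
}
```

Each of these is a one- or two-line change at the call site, which is why the I/O side was never the interesting problem.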