Gaming: Software Engineer Creates Classic SimCity-style Map of NYC—and...
I enjoy a good traced ray, AI-upscaled texture, or 'hallucinated' frame as much as the next person, but I think pixel art remains an important part of any art team's tool kit. Pixel art offers far more than nostalgic throwback appeal; it's capable of a level of expression that would give even the most photorealistic polygons a run for their money. So when I stumbled across the Isometric NYC city map project, I was at first properly stoked, and then deeply conflicted.
Isometric NYC is the one-man effort of Andy Coenen to render all of New York City in an isometric pixel art style distinctly reminiscent of the SimCity series. But rather than laboriously building this city pixel-by-pixel, Coenen has instead leveraged multiple AI agents to create this city map. The entire process required much more work than dashing off a handful of prompts, but you are perhaps beginning to understand my sense of conflict.
Coenen himself is a software engineer currently conducting AI research at Google DeepMind. He explains that the original idea for Isometric NYC was to use Google's "Nano Banana to generate a pixel art map from satellite imagery tile-by-tile," though Coenen ultimately used a number of AI agents, including "Claude Code, Gemini CLI, and Cursor." He also shares that he "wound up writing almost no code for this project."
He first set to work by 'whiteboxing' city geometry using 3D CityGML data. Unfortunately, his original plan didn't work out because "there was a bit too much inconsistency between the 'whitebox' geometry and the top-down satellite imagery, and Nano Banana was prone to too much hallucination in resolving these differences."
Moving to the more precisely aligned geometry and texturing of the Google Maps 3D Tiles API, Coenen encountered further issues with Nano Banana. As such a large generative AI model, Nano Banana struggled to output a stylistically consistent city, and it ultimately proved both too slow and too pricey for the project's intended scope. So, Coenen elected to train something smaller and cheaper.
By feeding a Qwen Image-Edit model "a training dataset of ~40 input/output pairs," Coenen began to get AI-generated city tiles closer to his preferred style in a fine-tuning process that took "~4 hours and cost ~12 bucks." Many more weeks of work followed to create software that would allow you to zoom in and out of each tile without completely hammering your hardware.
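Coenen hasn't published the viewer internals in this piece, but the standard way to make a huge tiled map zoomable without hammering the hardware is a tile pyramid: pre-render the map at several zoom levels, cut each level into fixed-size tiles, and only load the tiles that overlap the current viewport. A minimal sketch of the visibility math (all names and numbers here are illustrative, not from Coenen's project):

```python
# Minimal tile-pyramid sketch: compute which tiles of a zoomable map
# overlap a viewport, so a renderer only loads those few images.
# Illustrative only; not Coenen's actual code.

TILE_SIZE = 256  # pixels per square tile, a common convention


def visible_tiles(viewport_x, viewport_y, viewport_w, viewport_h, zoom):
    """Return (zoom, col, row) keys for tiles overlapping the viewport.

    At zoom level z the map is a 2**z x 2**z grid of tiles; viewport
    coordinates are given in pixels at that zoom level.
    """
    n = 2 ** zoom  # tiles per side at this zoom level
    first_col = max(0, viewport_x // TILE_SIZE)
    first_row = max(0, viewport_y // TILE_SIZE)
    last_col = min(n - 1, (viewport_x + viewport_w - 1) // TILE_SIZE)
    last_row = min(n - 1, (viewport_y + viewport_h - 1) // TILE_SIZE)
    return [
        (zoom, c, r)
        for r in range(first_row, last_row + 1)
        for c in range(first_col, last_col + 1)
    ]


# A 1024x768 viewport at zoom 4 touches only 20 of the level's 256 tiles:
tiles = visible_tiles(300, 200, 1024, 768, 4)
print(len(tiles), tiles[0])  # → 20 (4, 1, 0)
```

Zooming in simply swaps to the next pyramid level's tiles, so memory use tracks the screen size rather than the full map.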
At a glance, the final result is pretty breathtaking.
Source: PC Gamer