Open Source Antigravity And Gemini 3 Coding Test
Testing AI coding assistants with real-world tasks: ConnectOnion agent framework migration and frontend development
I've been coding for 5 hours this morning using Antigravity and Gemini 3, and here are my conclusions.
First of all, here's my background for your reference:
Now, let's dive into the details of my coding experience over the past 5 hours.
OAuth is complicated. Last week I tried to implement it with Claude Code and couldn't get it working, so this time I used Antigravity with Gemini 3 to make it work.
One thing up front: even as a $400 subscriber, I still wasn't allocated enough tokens, which is annoying. Antigravity offers Gemini 3 Pro High and Gemini 3 Pro Low, and both lasted only about 2 hours before telling me I'd reached the limit.
In the end, it fixed the problem and also wrote some tests, but when I reviewed the code, I found it still over-engineers, wrapping everything in unnecessary try-catch blocks.
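To show what I mean by over-engineering, here's a hypothetical illustration of the pattern (my own sketch, not the model's actual output): every trivial step gets its own try-catch, even where nothing can realistically throw, compared with the straightforward version.

```typescript
// Over-engineered style: each step wrapped in its own try-catch,
// even though plain property access and typeof never throw here.
function parseTokenOverengineered(response: Record<string, unknown>): string | null {
  let token: unknown;
  try {
    token = response["access_token"]; // cannot throw on a plain object
  } catch {
    return null;
  }
  try {
    return typeof token === "string" ? token : null; // typeof never throws
  } catch {
    return null;
  }
}

// The equivalent simple version: same behavior, no defensive wrapping.
function parseTokenSimple(response: Record<string, unknown>): string | null {
  const token = response["access_token"];
  return typeof token === "string" ? token : null;
}

console.log(parseTokenSimple({ access_token: "abc123" })); // abc123
console.log(parseTokenSimple({})); // null
```

Both functions behave identically; the extra try-catch blocks only hide real errors and bloat the diff, which is exactly what I had to ask the model to strip back out.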
For the second test, I challenged Antigravity with Gemini 3 on a complex frontend task: updating 15 different pages at the same time. This is typically where AI coding assistants struggle, because they need to maintain context across multiple files and keep everything consistent.
The task involved refactoring a large web application's UI components, updating routing logic, and ensuring all pages maintained consistent styling and functionality. Here's what I observed:
Overall Assessment: For large-scale frontend refactoring, Gemini 3 performed better than Claude in my experience. It's more persistent and doesn't give up on long tasks. However, you need to watch for over-engineering and be ready to ask for simpler solutions.
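On the consistency point: one reason multi-page updates are hard for an assistant is that the same styling and routing decisions are repeated in every file, so 15 pages means 15 parallel edits to keep in sync. A minimal sketch of the mitigation I'd reach for (my illustration with hypothetical names, not code from this task): centralize the shared decisions so a page-wide change becomes a single edit.

```typescript
// Each page declares only what differs; shared layout/styling
// decisions live in one place instead of being copied 15 times.
interface PageConfig {
  path: string;
  title: string;
}

const SHARED = {
  containerClass: "mx-auto max-w-4xl px-4", // one styling decision for all pages
};

const PAGES: PageConfig[] = [
  { path: "/dashboard", title: "Dashboard" },
  { path: "/settings", title: "Settings" },
  // ...the remaining pages in the real task
];

// Any page-wide change (e.g. a new container class) is now one edit,
// instead of 15 edits an assistant must keep consistent across files.
function renderShell(page: PageConfig): string {
  return `<main class="${SHARED.containerClass}"><h1>${page.title}</h1></main>`;
}

console.log(renderShell(PAGES[0]));
```

Structuring the codebase this way shrinks the blast radius of exactly the kind of multi-file refactor I tested, whether a human or a model is making the change.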
Source: Dev.to