# Tools: 3D-First Portfolio Built with Antigravity
2026-01-31
*This is a submission for the New Year, New You Portfolio Challenge, presented by Google AI.*

## About Me

I'm Aayush Bisht, a MERN stack developer with a strong interest in building interactive, visually rich web experiences that combine solid engineering with thoughtful design. I enjoy working at the intersection of frontend performance, animation, and system architecture, where small details like motion, timing, and interaction can completely change how a product feels.

For this project, I wanted to move beyond a traditional portfolio and explore how modern tools like Gemini 3 and Google Antigravity can support creative and technical decision-making throughout the development process. Instead of using AI only to generate code, I used it to reason about structure, animation flow, and interaction design.
This portfolio reflects how I like to build: ambitious ideas, broken down thoughtfully, executed with care, and refined through iteration.

## Portfolio

The portfolio relies heavily on scroll-driven animation, 3D interaction, and viewport-aware transitions. For that reason, the best way to experience it is in a full browser window.

Live Portfolio:
https://portfolio-v2-709587781727.us-central1.run.app/

The app is deployed to Google Cloud Run with the challenge label `--labels dev-tutorial=devnewyear2026`.

## How I Built It

This project is a Next.js 14 interactive 3D portfolio built around a single idea: use motion, transformation, and spatial interaction to guide the viewer through a story.

## Using Gemini 3

Gemini 3 played a key role throughout the development process, not by generating finished code, but by helping me reason about complex systems. Many of the architectural decisions, especially around animation sequencing and state-based transformations, were refined through iterative conversations with Gemini. I used Gemini 3 to:

- Explore different ways of mapping scroll position to 3D transformations
- Reason about how a single 3D object could represent multiple sections of a portfolio
- Evaluate animation pacing and transitions between sections
- Think through interaction patterns between React state, GSAP timelines, and Spline objects
- Stress-test ideas before committing to implementation

## Antigravity for Rapid Iteration

While Gemini helped shape ideas, Antigravity made it possible to execute them quickly. Antigravity accelerated:

- Refactoring large animation orchestrators
- Iterating on scroll-triggered logic without breaking behavior
- Maintaining consistency across many interactive components
- Experimenting with alternate animation flows safely

This allowed the project to evolve organically without losing structure.

## The 3D Keyboard System

At the heart of the portfolio is a single 3D object: a mechanical keyboard designed in Spline, used as a dynamic visual metaphor. The keyboard transforms through multiple states as the user scrolls:

- Hero: subtle rotation and motion to establish presence
- Skills: interactive keycaps representing technologies
- Projects: dramatic rotation with animated elements
- Contact: a teardown animation where keycaps break apart

Each keycap is mapped to a real skill and responds to hover and press events, updating live text inside the 3D scene. All transformations are orchestrated through GSAP timelines tied directly to scroll position.

## Designing the Portfolio Around a 3D Model

Rather than treating the 3D model as a background visual or decorative element, the entire website was designed around it. The layout, section flow, scroll behavior, and interactions were all shaped to support how the keyboard transforms and reacts over time.

Gemini played a key role in reasoning through this model-first approach. Using Gemini, I explored questions like:

- How can a single 3D object represent multiple sections of a portfolio?
- What transformations feel natural when transitioning between skills, projects, and contact?
- How should spatial changes (scale, rotation, position) map to scroll progression?
- Where should interaction live: in the UI, the 3D scene, or both?

These conversations helped define a system where each section corresponds to a distinct keyboard state, with its own transformation logic and interaction rules. The result is a portfolio where the 3D model isn't an add-on; it's the structural backbone of the experience. Content and motion exist to complement the model, not compete with it.
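The section-to-state system described above can be sketched as a pair of pure functions. This is a minimal illustration only: the four state names come from the sections of the portfolio, but the transform values and equal-quarter scroll split are made-up placeholders, and the real site drives these transitions through GSAP ScrollTrigger timelines rather than functions like these.

```typescript
// Hypothetical sketch: map overall scroll progress (0..1) to a keyboard
// state and an interpolated transform. Values are illustrative, not the
// project's actual numbers.
type KeyboardState = "hero" | "skills" | "projects" | "contact";

interface Transform {
  rotationY: number; // degrees
  scale: number;
}

// Each section owns an equal quarter of the scroll range (an assumption).
const SECTIONS: { state: KeyboardState; transform: Transform }[] = [
  { state: "hero",     transform: { rotationY: 0,   scale: 1.0 } },
  { state: "skills",   transform: { rotationY: 90,  scale: 1.1 } },
  { state: "projects", transform: { rotationY: 270, scale: 0.9 } },
  { state: "contact",  transform: { rotationY: 360, scale: 0.6 } },
];

// Which keyboard state is active at a given scroll progress.
function stateForProgress(progress: number): KeyboardState {
  const clamped = Math.min(Math.max(progress, 0), 1);
  const index = Math.min(Math.floor(clamped * SECTIONS.length), SECTIONS.length - 1);
  return SECTIONS[index].state;
}

// Linearly interpolate between the transforms of adjacent sections, the
// same idea a scroll-scrubbed timeline applies under the hood.
function transformForProgress(progress: number): Transform {
  const clamped = Math.min(Math.max(progress, 0), 1);
  const scaled = clamped * (SECTIONS.length - 1);
  const i = Math.min(Math.floor(scaled), SECTIONS.length - 2);
  const t = scaled - i;
  const a = SECTIONS[i].transform;
  const b = SECTIONS[i + 1].transform;
  return {
    rotationY: a.rotationY + (b.rotationY - a.rotationY) * t,
    scale: a.scale + (b.scale - a.scale) * t,
  };
}
```

Keeping the mapping pure like this makes each keyboard state easy to reason about (and test) independently of the animation library that ultimately renders it.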
## What I'm Most Proud Of

### Turning a 3D Object Into a Narrative Device

Gemini made it possible to think through this complexity before implementation, allowing the final system to feel intentional rather than experimental. Instead of using 3D as decoration, the keyboard acts as a storytelling element, reacting, transforming, and evolving alongside the content.

### Complex Animation, Controlled Carefully

GSAP handles precision choreography. Framer Motion handles UI-level interactions. Spline handles spatial and interactive depth. Each tool is used where it excels.

### AI as an Enabler, Not a Gimmick

There is no visible "AI feature" in the UI. Yet AI influenced nearly every decision:

- How animations flow
- How interactions feel
- How complexity stays manageable

Gemini and Antigravity didn't replace creativity; they amplified it.

## Conclusion

This project is not about building a flashy portfolio.
It’s about exploring what becomes possible when modern frontend engineering, 3D interaction, and AI-assisted reasoning come together.
Gemini 3 and Antigravity made it possible to think bigger, iterate faster, and refine ideas more confidently. And that's the most exciting part.

## Tech Stack

- Next.js 14 (App Router)
- React 18 + TypeScript
- Tailwind CSS
- GSAP + ScrollTrigger
- Framer Motion
- Spline (3D design & runtime)
- Lenis (smooth scrolling)
- Radix UI + custom UI components
- Deployed on Google Cloud Run
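Of the stack above, Lenis supplies the smooth scrolling. Its core idea is to exponentially ease the rendered scroll position toward the real one on every animation frame. Here is a minimal sketch of that idea with made-up numbers; it is my own illustration of the technique, not Lenis source code.

```typescript
// Illustrative only: the exponential-smoothing ("lerp") idea behind
// smooth-scroll libraries such as Lenis. `lerpFactor` is the fraction of
// the remaining distance closed on each animation frame.
function smoothScrollStep(current: number, target: number, lerpFactor: number): number {
  return current + (target - current) * lerpFactor;
}

// Run several frames to watch the rendered position converge on the target.
function simulateFrames(start: number, target: number, lerpFactor: number, frames: number): number {
  let position = start;
  for (let frame = 0; frame < frames; frame++) {
    position = smoothScrollStep(position, target, lerpFactor);
  }
  return position;
}
```

With a lerp factor of 0.5, each frame covers half the remaining distance, so the position converges quickly without ever overshooting; smaller factors give the floatier feel this portfolio relies on.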