9 AI Demos, One Week in Vegas: Our Work at Google Cloud Next '26

We just got back from Google Cloud Next '26 in Las Vegas, and we're still processing it all. Our team built 9 interactive demos and experiences across the Google Cloud Showcase floor, bringing some of the biggest announcements of the week to life for the tens of thousands of attendees who walked through.

This year's event was all about AI agents, generative media, and the science behind it all. Our job? Make those ideas feel real. Tangible. Something you could walk up to and actually use.

We partnered with GPJ, who crafted and built the physical environments for everything you'll see below. On our end, we handled concept development, design, front-end and back-end development, and integration across the board.

Here's what we built.

The Three AI Hero Demos

These were the centerpiece experiences on the Showcase floor, each one built to put a major Google Cloud capability directly in people's hands.

AI Science Explorer

The tagline says it all: "Discover the scale of Google's scientific research through food."

Attendees started by picking up one of 10 physical cubes, each containing a real crop specimen: tomatoes, corn, coffee beans, grapes, cotton, and more. Placing a cube on the scanner brought the experience to life. Using a zoom feature, visitors could travel from planetary-scale satellite imagery all the way down to molecular biology and the subatomic world, following their crop's journey through each layer of scientific research.

The whole thing is powered by AlphaEarth Foundations, part of Google's Earth AI initiative. The model turns raw satellite data into a continuous view of the planet's evolution, identifying everything from crop types to land-use changes across the globe. In the demo, you could see land similarity maps, track specific growers (like tomato farmers in Japan), and understand how DeepMind's science breakthroughs connect to the food on your table.

Our team designed and developed the full interactive experience, from the 3D globe interface to the zoom transitions between scales. The challenge was making genuinely complex science feel intuitive enough that someone could walk up and start exploring without instructions.

AI Agent Challenge

With the Gemini Enterprise Agent Platform as the headline announcement of the week, we built a mini-game that made the concept tangible. Players worked through a series of city-based challenges, each one powered by an Orchestrator Agent built on Google's Agent Development Kit (ADK) and deployed on Cloud Run.

Here's where it got interesting. The Orchestrator Agent decided which sub-agents to deploy based on the nature of each challenge, connecting to real enterprise data sources like BigQuery, Google Drive, Jira, Confluence, and Box via Gemini Enterprise. As players progressed, the agent architecture scaled up: from a single specialized agent, to an orchestrator managing local sub-agents, to an autonomous team that self-organized and created its own workflow based on business logic.

The processed data fed back into physical puzzle mechanisms that players used to complete each challenge. It was a playful way to show what "agentic AI" actually feels like when you're interacting with it, and how agents can coordinate across enterprise systems in real time.
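The pattern described above — a top-level agent deciding which specialists to invoke for each challenge — can be sketched in plain Python. This is an illustrative toy, not the demo's code: the real build used Google's Agent Development Kit with Gemini doing the routing and live connectors to BigQuery, Drive, and the rest, while here the "sub-agents" are stand-in functions and the routing is simple keyword matching.

```python
# Minimal sketch of an orchestrator routing challenges to specialist
# sub-agents. All names and routing rules are illustrative stand-ins;
# the real demo used Google's ADK with Gemini Enterprise connectors.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Orchestrator:
    # Registry mapping a challenge tag (e.g. "traffic") to a sub-agent.
    sub_agents: dict[str, Callable[[str], str]] = field(default_factory=dict)

    def register(self, tag: str, agent: Callable[[str], str]) -> None:
        self.sub_agents[tag] = agent

    def handle(self, challenge: str) -> str:
        # Pick a sub-agent for the challenge. In the demo this decision
        # was made by a model; here it's naive keyword matching.
        for tag, agent in self.sub_agents.items():
            if tag in challenge.lower():
                return agent(challenge)
        return "no suitable agent found"

# Hypothetical sub-agents standing in for enterprise data connectors.
orc = Orchestrator()
orc.register("traffic", lambda c: "traffic agent: queried BigQuery")
orc.register("docs", lambda c: "docs agent: searched Drive")

print(orc.handle("Analyze the city's traffic data"))
# → traffic agent: queried BigQuery
```

Scaling this shape up — sub-agents that are themselves orchestrators, or that register new peers at runtime — is roughly what the challenge's later stages demonstrated.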

AI Design Workshop

This was probably the one that stopped people in their tracks the longest. The experience walked attendees through a complete product design pipeline, from first idea to go-to-market kit, with AI generating the interface itself along the way.

It worked in four stages. First, Gemini generated an initial product image and simultaneously built a custom UI tailored to that specific product's physical attributes, surfacing only the relevant design controls (form, fluidity, definition, cutout style). Then Nano Banana took over for dynamic refinement, processing visual and textual instructions to regenerate specific design edits, while deconstructing the product into exploded views, knolling layouts, and market analysis.

From there, Nano Banana staged the product in photorealistic room environments with day and night modes, maintaining consistent 3D geometry across scenes. Finally, Gemini compiled a comprehensive go-to-market kit: market analysis, bill of materials, technical schematics, all finalized with a cinematic Veo video to kickstart production.
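The four stages above chain together naturally as a pipeline, each stage's output feeding the next. A toy sketch of that flow (function names are ours, standing in for the Gemini, Nano Banana, and Veo calls — they just tag the state rather than generate anything):

```python
# Toy sketch of the four-stage design pipeline. Each function is a
# placeholder for a model call; real generation is not performed here.

def generate_concept(brief: str) -> dict:
    # Stage 1: initial product image plus a UI tailored to the product.
    return {"brief": brief, "image": "concept.png",
            "ui_controls": ["form", "fluidity", "definition", "cutout style"]}

def refine_design(state: dict, edits: list[str]) -> dict:
    # Stage 2: targeted regeneration from visual/text instructions.
    state["edits"] = edits
    return state

def stage_in_scene(state: dict, mode: str) -> dict:
    # Stage 3: photoreal staging with day/night modes.
    state["scene"] = f"room ({mode})"
    return state

def compile_kit(state: dict) -> dict:
    # Stage 4: go-to-market kit plus a launch video.
    state["kit"] = ["market analysis", "bill of materials",
                    "technical schematics", "launch_video.mp4"]
    return state

state = compile_kit(
    stage_in_scene(
        refine_design(generate_concept("desk lamp"), ["softer form"]),
        "night"))
```

The interesting design choice in the real demo was that stage 1 also generated the UI itself, so the controls exposed in later stages depended on what was being made.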

For designers and product teams, this was a concrete preview of what generative interfaces mean for creative workflows. The UI wasn't pre-built. It was constructed by AI in response to what you were making.

Next In: Industry Demos

We also built a series of "Next In" demos, each one imagining how AI transforms a specific industry.

Next In Health was a futuristic physical therapy tool, showing how AI could guide rehabilitation with real-time feedback and personalized treatment plans.

Next In Retail let visitors build a custom travel wardrobe tailored to their destination and weather, with AI-generated outfit recommendations they could even try on virtually using Nano Banana.

Next In Media was the showstopper of this group. Attendees could take on roles in a media production crew and create a custom short film with their friends, from script to stunning visuals, custom audio, and smooth transitions, all powered by Google's generative media models.

What Made This Year Special

This year the scope grew, but the approach stayed the same: make the technology feel human, put it in people's hands, and watch what happens.

What struck us most was the range of conversations these experiences sparked. Business leaders talking about agents. Designers rethinking interfaces. Developers prototyping on the spot. The best demos don't just show what's possible, they get people thinking about what they'd build next.

We're grateful to the Google Cloud team for the continued partnership, and to GPJ for making these installations look and feel incredible. Already thinking about next year.
