Google launches Gemini Live: Real-time camera and screen sharing lets AI see what you see on Android

Less than a week after opening up Gemini 2.5 Pro Canvas to the public, Google is back with Gemini Live—a new feature that lets users share their screen or camera feed with its AI assistant in real time.
Rolling out as of April 7, Gemini Live is now available on Pixel 9 and Galaxy S25 series devices, and it’s open to Gemini Advanced subscribers on Android. It’s the company’s most direct move yet to bring visual, real-time AI interaction to smartphones.
“It’s here: ask Gemini about anything you see. Share your screen or camera in Gemini Live to brainstorm, troubleshoot, and more. Rolling out to Pixel 9 and Samsung Galaxy S25 devices today and available for all Advanced users on @Android in the Gemini app,” the Google Gemini App team (@GeminiApp) shared on X on April 7, 2025.
Gemini Live: Making AI More Visual
First teased as part of Project Astra at I/O 2024, Gemini Live changes how people interact with AI. Instead of relying on typed questions or static photos, users can now show the AI what they’re seeing—live. Point your camera at a meal, a gadget, or your surroundings, and Gemini can respond with relevant info. Or share your screen while browsing, writing, or coding, and it can react in real time.
According to Google’s own post, the idea is to let users “talk through ideas, learn about your environment, or get help with what’s on your screen.” That opens up use cases for everyday scenarios—looking up a recipe from your dinner plate, getting feedback on a document, or asking for help debugging code you’re working on.
Who Gets It First?
The rollout kicked off with Google’s Pixel 9 and Samsung’s Galaxy S25 series. If you’re using either device, you already have access through the Gemini app without any extra cost. For everyone else on Android, Gemini Live is available through a Gemini Advanced subscription, part of Google’s $19.99/month AI Premium plan under Google One. You’ll need Android 10 or later to use it.
The rollout hasn’t hit everyone at once, though. Several users had to force-close the Gemini app to trigger the update, according to reports from 9to5Google and Android Authority. So if it’s not showing up yet, that may help.
The subscription requirement has stirred mixed reactions. Some users praised the new feature—@ai_for_success posted on X, “This is awesome”—but others questioned why a visual AI assistant should live behind a paywall when it’s being promoted as the next big thing.
How It Works
On supported devices, holding the power button launches Gemini. From there, users can tap either the camera icon or “Share screen with Live.” Android will ask you to confirm screen sharing (there’s currently no option to share just a single app), and once the session starts, a persistent notification keeps it visible. In camera mode, the live feed streams directly to the assistant.
CNET called the update a major shift from static images to more fluid, real-time interactions. For example, pointing the camera at a Jigglypuff toy led Gemini to identify it and offer extra context, according to Droid Life. Other users shared their screens to get help with shopping lists, coding tasks, and online content reviews—getting near-instant feedback from the assistant.
From Astra to Android
This feature is powered by Project Astra and Google’s Gemini 2.0 model, which is also rumored to be a key part of future AR experiences. While we’re still waiting to see what that looks like—especially with rumors of smart glasses—Gemini Live is clearly a step in that direction.
OpenAI’s ChatGPT introduced a similar camera-based tool last year, but Gemini has a different advantage: tighter integration into Android’s ecosystem. That could make it feel more like part of the phone, not just another app.
What People Are Saying
Reactions online have been enthusiastic. Tech reviewers and early users are sharing how the assistant feels more “present” during interactions—especially when dealing with real-world objects or visual content. It also supports over 45 languages, helping it respond more naturally across different regions.
That said, the price tag for full access is still a sticking point. The subscription model might keep some users from trying the feature, at least for now.
Looking Ahead
Gemini Live brings a new dimension to how people use AI on their phones. Whether you’re troubleshooting, learning something new, or just looking something up on the go, the ability to show your AI what you’re dealing with could be a turning point in how these assistants fit into daily life.
For now, it’s Android-only and rolling out gradually. But if you’re a Pixel 9 or Galaxy S25 user—or paying for Gemini Advanced—it’s already in your hands.