Google’s new update to Gemini Live enables the assistant to analyze smartphone screens and camera feeds in real time, company spokesperson Alex Joseph confirmed to The Verge. The capability was first teased nearly a year ago as part of Project Astra, Google’s camera-based AI tech, and it lets users take the AI agent into the real world to answer questions about whatever they show it.
Gemini gains sight
Users bring up the Gemini overlay on their phone and tap “Share screen with Live” or “Ask about screen” to use the new feature. They can also enter the Gemini Live interface, start a video stream, and toggle the AI between the front and back cameras.
The feature was first spotted by a Reddit user on their Xiaomi smartphone, according to 9to5Google. That same user shared a new video today demonstrating Gemini’s new screen-reading function.
A short demo of Project Astra (“Share screen with Live”), posted by u/Kien_PS in r/Bard.
This is one of two Astra features Google announced earlier in March for Gemini Advanced subscribers on the $20-per-month Google One AI Premium plan. The second feature, live video analysis, allows the AI to interpret smartphone camera feeds and respond to questions, as demonstrated in a Google video where a user asks for pottery paint color recommendations. Google says the features will eventually be available on Android devices generally, but for some reason (probably one that rhymes with “honey”), they’re rolling out first on Pixel and Galaxy S25 models.
The rollout underscores Google’s lead in the mobile AI race as competitors lag: Amazon’s Alexa Plus is in limited early testing, Apple has delayed its Siri upgrade, Samsung’s Bixby remains secondary to Gemini on its own devices, and OpenAI’s ChatGPT is more of a general-purpose AI than a dedicated phone assistant. Google’s feature seems like a more useful application of AI technology, and it may benefit those who struggle with technology for physical or cognitive reasons.