Pixel 10 Is an AI-First Phone That Blurs Reality
Google is far ahead of Apple in infusing AI into its devices. But we're quickly entering a world where reality is much fuzzier.
Google unveiled its most ambitious product portfolio yet at its Made by Google event today, showing off its new Pixel 10 phones, Pixel Watch, and Pixel Buds.
The hardware refresh was the usual annual update of spec improvements, but the real star was Gemini. Well, there were human stars to showcase all the tech — Jimmy Fallon hosted a live event with Google execs, and NBA star Giannis Antetokounmpo appeared in a video with Formula 1 driver Lando Norris, where Gemini coached each on how to excel at the other’s sport.
If you want the devices’ specs and details, Google’s official blog, The Keyword, has the breakdown, and publications like The Verge have covered the specs and performance. I’m most interested in the Pixel’s application of AI and how it will influence human behavior.
Google showed off a phone that coaches, nudges, retrieves information, and edits in real-time. Rick Osterloh, Google’s Senior Vice President of Devices and Services, said the company has long been working towards “personal intelligence” with its Pixel line.
When AI doesn’t wait for prompts
In practice, that means surfacing relevant info before you need it. That’s long been the dream of ambient computing.
Google showed off several such features. Magic Cue surfaces details about your day before you ask. Gemini Live, like ChatGPT, interacts with you over video and identifies what you show the camera. Voice Translate takes what you say and speaks it in another language. On the Pixel, AI is a doer and an enabler of new information connections. And it decides where that work happens: locally for privacy and speed, in the cloud for sheer power.
The camera is where things get even more interesting. The camera coach suggests shots right as you point the camera at the subject. And the AI editing capabilities are even more powerful; you can truly bend reality to your will with just a natural language prompt. Or use Google’s Generative Zoom to fill in the blanks by reconstructing details the sensor didn’t capture.
Every photo will soon be suspect. Were those images captured or conjured?
Putting a tag on reality
The live event leaned on celebrity cameos, slang, and splashy demos to showcase Gemini. One of the more consequential details was buried in a Google blog post: C2PA Content Credentials are built into the camera app.
Google’s Tensor G5 and Titan M2 security chip generate signed metadata documenting when images were created or edited with AI. It’s an essential step toward reestablishing authenticity as images become easier to manipulate.
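To make the idea concrete: C2PA Content Credentials are embedded in JPEG files as JUMBF boxes carried in APP11 segments. The sketch below is a simplified heuristic that merely detects whether such a segment appears to be present; it is my own illustration, not Google's implementation, and it performs no cryptographic verification. Real validation should use official C2PA tooling such as c2patool or the C2PA SDKs.

```python
def has_c2pa_manifest(jpeg_bytes: bytes) -> bool:
    """Heuristically check a JPEG for an embedded C2PA manifest.

    C2PA manifests are carried in JPEG APP11 (0xFFEB) segments as
    JUMBF boxes labeled "c2pa". This is a simplified sketch that only
    looks for that label in APP11 segments -- it does NOT verify the
    manifest's signature or integrity.
    """
    if not jpeg_bytes.startswith(b"\xff\xd8"):  # SOI marker
        return False  # not a JPEG
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # lost marker sync; stop scanning
        marker = jpeg_bytes[i + 1]
        if marker in (0xD9, 0xDA):  # EOI or start of scan data
            break  # manifests live in the header segments
        length = int.from_bytes(jpeg_bytes[i + 2 : i + 4], "big")
        segment = jpeg_bytes[i + 4 : i + 2 + length]
        if marker == 0xEB and b"c2pa" in segment:  # APP11 segment
            return True
        i += 2 + length
    return False
```

A verifier would go much further, walking the JUMBF box structure and checking the claim signature against a trust list; this sketch only answers "does a credential seem to be attached at all," which is the question most viewers will implicitly ask of every photo.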
Trust and usefulness are the two threads I commonly hear about when working with and coaching others on how to use AI effectively. Without trust, AI feels like a gimmick that isn’t ready for serious work. AI must be useful and dependable to earn people’s trust.
The Pixel has a tiny market share. Apple will eventually get it together and become more competitive in applying AI across its devices. This Pixel launch is worth watching not because it will outsell the iPhone, but because it shows how quickly our phones are learning to shape what’s real.