📰 Full Story
On April 16–17, Google rolled out a Personal Intelligence integration that lets its Gemini chatbot use the Nano Banana 2 image model to generate personalised images drawing on a user’s Google data, including Google Photos labels and other connected apps.
Opt-in users who connect Photos, Gmail, Drive, Calendar, Maps and other account services can ask Gemini simple prompts — for example, “Design my dream house” or “Create a picture of my family” — and Gemini will automatically use inferred tastes, labelled photos and account context to shape outputs.
The company says the feature will appear for paid AI subscribers (Plus, Pro and Ultra) in the United States in the coming days and will reach Chrome desktop and more users thereafter; Europe is not in the initial rollout.
Google emphasises that Personal Intelligence is opt-in, that users can view a “sources” list showing which images informed a result, and that the Gemini app does not “directly” train models on private Photos libraries, though it may use prompts and responses to improve functionality.
Users can refine results, provide feedback or select reference photos manually.
🕰️ The Story So Far: An Evolving Timeline
Tuesday, April 21, 2026 01:29 UTC
Google Photos adds subtle facial touch-up tools
Friday, April 17, 2026 08:08 UTC
Google's Gemini uses Photos for personalized images

💬 Commentary