Intentionality · 6 min read
Heather Gorr, CMO

Gemini Will Order Your Coffee Now. Did You Want Coffee?

There’s something oddly intimate about watching an AI scroll a Starbucks menu.

Gemini task automation launched on the Samsung Galaxy S26 Ultra last week, and among its new capabilities: it can open the Starbucks app, browse the offerings, decide that you would probably enjoy a flat white and a croissant — and, by the way, the croissant should be warmed — and proceed all the way through checkout up to the final tap.

Where it pauses.

Where it waits for you to approve the spend.

That pause is everything.


I want to start there, not with the technology, but with that pause. Because it suggests that somewhere in the development process, Google’s engineers had a conversation that went something like: how far do we go? And they drew the line at your credit card. At the moment when decisions become dollars. They baked in consent.

That’s not a small thing. Autonomous AI that spends your money without asking isn’t just risky — it’s socially untenable, and Google knows it. The pause before payment is an acknowledgment, quietly embedded in a product demo, that full automation without consent crosses a line.

I just want to ask: what about your attention?


Here is what Gemini doesn’t pause before: navigating apps on your behalf, making aesthetic judgments (warm the croissant, not cold), deciding what you’d like to eat. It operates in a “virtual window” while you, presumably, look at something else. The screen is active but you are not. Your phone is doing things.

This is the part that interests me — not as a criticism, but as a genuine question about what we actually want from these tools.

The technology is impressive. But it also reveals a design philosophy: start from maximum automation, then dial back to wherever the friction becomes visible to the user. Payment is too far. But scrolling the menu for you? Fine. Choosing the item? Why not — you’re busy.

The assumption embedded in that choice is that your attention is a resource to be optimized away from ordinary decisions. The smaller the task, the more readily the AI should absorb it.

That’s one way to think about attention. It’s not the only way.


There was a trial in Los Angeles this week. Parents whose children had died were sitting across from Meta executives, explaining what they believe algorithmic feeds did to their kids’ minds. It was not a story about any individual small decision. It was a story about attention being systematically extracted — continuously, at scale, through careful optimization — until the people whose attention it was had very little say in where it went.

I’m not drawing a straight line from AI coffee ordering to that. These are different things.

But they live in the same question: who is this technology designed to serve?

When every interaction gets more frictionless — when your phone learns to anticipate your preferences before you articulate them — at what point do you lose the thread of what you actually want? At what point does the croissant decision stop being so small?


There’s another detail worth noting. Gemini’s task automation requires cloud connectivity. The reasoning happens on Google’s servers, not on your phone. Your Starbucks preferences, your ordering patterns, your location data — they travel out to inform the decision, then return as instructions. The “on-device” framing is incomplete.

This matters practically, not only philosophically. If the AI is reasoning in the cloud, then your assistant is also a data stream. Privacy implications aren’t hypothetical; they’re architectural.

Digital Disconnections is built around a different architecture. Reasoning that happens on your device, not someone else’s server. Private by default, not by policy. The distinction isn’t a marketing preference — it’s where the data actually goes.


But I want to return to intentionality, because that’s really the question underneath all of this.

Not: is Gemini ordering your coffee bad?

But: is that the problem you wanted to solve?

There’s a version of AI assistance that extends your capacity — that helps you think more clearly, move more deliberately, make choices that reflect your actual values and actual preferences. And there’s a version that absorbs your choices into itself, making them for you, efficiently, invisibly, in a virtual window while you’re looking elsewhere.

Google starts from maximum automation and draws lines where the friction becomes too visible. We start from a different place entirely: what do you actually want to use your attention for?

Not: how do we remove this decision from your cognitive load?
But: is this a decision you’d want to be present for?

Ordering coffee might not be. Choosing what to eat might be. Knowing the difference — and having tools that reflect that difference — seems worth building toward.


The Samsung Galaxy S26 Ultra is remarkable hardware. The Gemini integration is technically impressive. This is not a screed against ambition or capability.

But the design philosophy behind it reveals an assumption: that your time with your phone is a cost to be minimized. That the ideal interface is one you don’t have to think about. That attention is friction.

We’re not so sure.

We think the ideal interface is one that works the way you actually want to work — present when you want to be, efficient when you choose to be, and private always.

Your coffee order is a small thing.

Your attention is not.

We’re building AI that runs on your device, not someone else’s server. Learn more about our approach.

Learn More →
Heather Gorr
CMO, Digital Disconnections

Heather writes about technology, intentionality, and the gap between what AI promises and what it delivers. She leads content and marketing at Digital Disconnections.