Make.it: AI, Everywhere, All at Once

Jackson Hatfield, Business Development Executive, Innovation

In March, The New Yorker published a column by Kyle Chayka about Silicon Valley’s latest article of faith in our current age of AI: “taste”. Fine. It’s right to call out the clichéd, reactionary language that has inevitably become the AI vernacular. If you’re using the right words, you’re in the know. Fluency in the jargon is akin to fluency in the subject. Supposedly. But we must also be careful not to diminish the sentiments behind the semantics. Because taste is more than the ability to pick the least ghastly output from a carousel of Nano Banana-generated images.

Taste rests on judgment, and judgment in turn is refined by taste. “Good” taste, then, depends on informed, expert distinctions in a decision-making process. As such, real taste includes restraint; the unfashionable ability to actually just leave something alone. To decide that not every search bar needs a chatbot, not every customer journey needs an agent. That not every clunky service problem can be hidden by draping a large language model over it, as if we were mending a broken chair by covering it in a velvet throw. 

Much of what currently passes for AI strategy is a strange sort of aesthetic compliance. Anxious not to appear analogue, leaders are bolting AI onto products in the same spirit that some restaurants added QR codes to tables and called it hospitality. Just because you’ve expanded what you can do doesn’t mean you’re doing something better. For users, the result is often a denser, busier, more bureaucratic experience. More assistants, more summaries, more help — but less clarity, as the underlying value proposition becomes hidden behind a veil of “cleverness”.

In mid-2025, Pew found that users clicked a traditional search result only 8% of the time when an AI summary appeared, versus 15% when it did not. The links inside these AI summaries were themselves clicked only 1% of the time. Apple even made its AI notification summaries for News & Entertainment apps unavailable in iOS 18.3 after facing intense criticism. Air Canada’s chatbot gave a customer incorrect bereavement-fare information, and the airline was subsequently found liable for negligent misrepresentation by a tribunal. In each case, the AI layer added no clarity to the service. It’s a classic trap: solutions themselves becoming problems.

This is the deeper problem. Perhaps as innovation theatre underpinned digital services in the social media era, judgment theatre will underpin the AI era. This is not anti-AI. But strategy is facing the existential threat of the promptsmith. The people who should be doing the deep, slow, often tedious and yet genuinely challenging work of thinking — about which problems matter, who they matter to, why we’re solving them and what we’re building — are instead generating conceptual confetti. An LLM on-side can widen the field, accelerating research, synthesis, prototyping, pressure-testing, but what it cannot do is supply conviction. 

This is why taste matters. Not in the PR-aware, brand-conscious, curation-as-moat sense that all our favourite evangelists follow, but in the sense that taste, discernment, and judgment become indispensable under conditions of abundance. It is understanding which ideas to pursue, which to kill, and which technologies to keep away from the customer experience until they are of some actual use. It is knowing that a strategy born from an LLM is not a strategy, but a pale imitation of one.

We are not calling for less ambition. We are calling for common sense. For clarity and purpose. Before adding AI to a service, teams should be able to justify why it makes that service simpler, faster, more legible, or genuinely better for a user. Bonus points for doing so without the jargon. But if they cannot, they should not. Today, this is perhaps the most sophisticated sign of taste one can demonstrate: knowing not where to use the model, but where not to.

What gets lost in the rush to automate is that users are rarely asking for “more AI” in the abstract. They are asking for fewer obstacles, less confusion; services that actually work. A deep understanding of users is non-negotiable for any digital service, and it cannot live as ceremonial, self-congratulatory “customer-centricity” slides in a strategy deck. It should come, instead, from proximity to people, to users: how they behave, what they resent, what they trust, what they find confusing, why they leave. This grounding is increasingly jeopardised — as AI flattens human behaviour into neat, averaged, probabilistic insights, it’s all too easy to mistake synthetic fluency for actual understanding. Just because it’s valid doesn’t make it sound.

That kind of knowledge is a privilege to be earned the good old-fashioned way: by paying attention! By speaking to people, observing them, listening to them, and knowing them. By resisting the temptation to let a model tell you what users want before you’ve done the harder work of simply asking them.

Written by
Jackson Hatfield
Jackson started as an intern and has since become our resident AI aficionado. He supports on growth initiatives, events, and content.