Reflexive AI, Reckless Intent?
Reflex over reflection. Speed over sense.
Shopify’s AI memo sets a bold baseline. But are we scaling the right things?

In March, Shopify CEO Tobi Lütke sent a company-wide memo with this headline:
“AI usage is now a baseline expectation.”
The message, later shared publicly by professor Ethan Mollick, wasn’t subtle — and that’s kind of the point. It lays out a new internal mandate:
- AI usage will be part of performance reviews
- Every prototype should involve AI
- Teams must justify why they can’t use AI before requesting more headcount
- The policy applies to everyone — including Shopify’s executive team
On the surface, this sounds exciting. Bold. Forward-thinking. It feels like the kind of leadership we say we want: decisive, fast, and all-in.
But here’s the tension: when AI becomes baseline without deeper reflection, speed becomes synonymous with success. And that’s where we risk scaling something dangerous.
This Isn’t the First Time We’ve Declared a New Baseline
We’ve done this dance before.
I watched the same cycle play out before — when UX hit its stride, suddenly everyone needed a UX team. Then came agile, design systems, Figma, and “product thinking.” Each time, we rushed to adopt the newest standard — while the deeper work got left behind.
We stopped asking:
- What are we designing?
- Why does it matter?
- Who does it serve — and who does it leave behind?
That’s what I wrote about in Design Isn’t Dead, You Just Sound Dumb. It wasn’t just a rant — it was a warning: declaring the death of design is often a reflection of shallow understanding, not industry truth.
Then in Post-UX Era, I made the case that design has outgrown the boundaries people placed around it. It’s not just about usability anymore. It’s about systems, outcomes, ethics, and impact.
AI fits into this trajectory perfectly — but only if we choose to use it well.
AI Is a Mirror. Not a Mind.
Tobi’s memo states:
“AI acts as a multiplier… We are all lucky to work with the kind of people who contribute 10X of what was previously thought possible. And now the tools become 10X themselves.”
It’s an exciting thought. But here’s the thing about multipliers:
They don’t care what they’re multiplying.
If you feed AI with focused intent, deep insight, and meaningful goals — sure, it can take that further than we ever could alone.
But if you feed it bias, laziness, or poorly defined problems, it doesn’t correct you — it amplifies the damage. It scales assumptions. It reinforces shortcuts. It turns shallow inputs into polished-looking outputs that feel finished — but aren’t thoughtful, ethical, or even accurate.
This is the fundamental mistake I see leaders making: confusing intelligence with alignment. AI isn’t aligned. It’s not strategic. It’s not moral. It’s not curious. It’s a reflection engine — one that mirrors whatever you bring to it.
Multipliers don’t discern. We have to.
If we treat AI like an oracle, we’ll start outsourcing decisions that demand discernment. If we treat it like a mirror, we’ll start asking better questions about what it’s reflecting back — and why.
Because the truth is, AI doesn’t change the need for human judgment.
It raises the stakes of getting it wrong.
Reflex Is Not the Same as Maturity
The word “reflexive” appears throughout Tobi’s memo. The expectation is clear: AI usage should become second nature — as automatic as opening your laptop or checking your calendar.
That sounds efficient. But here’s the problem: Reflex without reflection is just reaction.
Making something second nature isn’t the same as making it second thought. And in a moment where AI is shaping everything from copy to code to product decisions, reflex can be dangerous. It can lead to overconfidence without understanding, output without insight, and activity without alignment.
We need to be careful not to confuse tool fluency with strategic clarity.
Just because someone is quick with prompts doesn’t mean they’re making better decisions.
Just because something is faster doesn’t mean it’s smarter.
Here’s where the memo hits especially hard:
“Before asking for more headcount and resources, teams must demonstrate why they cannot get what they want done using AI.”
On one hand, that’s a reasonable challenge — push teams to be resourceful. But in practice, it risks becoming a blunt instrument — one that:
- Pressures teams to force AI into places it doesn’t belong
- Dismisses the value of deep, slow, or human-centric work
- Treats efficiency as the only valid metric for resourcing
- Silently punishes the kind of exploration and collaboration that can’t be rushed
That’s not innovation. That’s pressure without perspective.
And ironically, it may lead to people pretending to use AI in the name of compliance, rather than using it in ways that are truly thoughtful, creative, or impactful.
We Shape Our Tools — Then They Shape Us
“First we shape our tools, thereafter our tools shape us.”
— Marshall McLuhan
Shopify’s AI directive is more than a policy shift — it’s a cultural one. It redefines value inside the organization. If AI use is now expected in every prototype, performance review, and planning conversation, then we’ve reshaped the environment people operate within.
But here’s where McLuhan’s insight hits harder:
“The medium is the message.”
The medium here isn’t just AI — it’s reflexive AI use. When that becomes the default, the message isn’t just “AI is helpful.” It becomes:
“Your value is tied to how well you automate yourself.”
And that’s when the tool starts shaping us.
It shapes:
- How fast we believe we should move
- Which questions we skip because the AI already gave us an answer
- Which skills are celebrated — and which ones quietly fade
- Whether we reward thoughtful exploration or just optimized execution
Culture is built on defaults, not mission statements. And right now, the default being designed says: fast is better, automation is essential, and manual is suspect.
If we don’t intervene with intention, this memo won’t just accelerate productivity — it will rewire values. Not in five years. Right now.
What Actually Deserves to Be Baseline
So if AI use is now the floor, what else belongs there with it?
Here’s my answer:
- Critical thinking must be baseline.
- Ethical reflection must be baseline.
- Clarity of purpose must be baseline.
- Design that centers people, not just productivity, must be baseline.
- Courage to ask “To what end?” must be baseline.
Because if we don’t ask those questions now, we’ll scale decisions faster than we can correct them. And we’ll mistake velocity for vision.
Design’s Role Has Never Been More Vital
This memo doesn’t mark the death of design. It marks a turning point.
Design isn’t about making things pretty. It’s about deciding what should be made at all. We need product teams who can integrate AI without outsourcing their judgment.
We need researchers who remind us that not everything worth knowing comes from a dataset. We need leaders who understand that AI can generate anything — but it can’t discern what’s worth generating.
Designers belong in this room — not to keep things usable, but to keep things human.
Final Thought: Be Careful What You Normalize
I admire Shopify’s boldness. This memo will push the company forward, no question. But we have to be vigilant about what gets normalized in the process.
If we’re not careful, we’ll build systems that reward reflex but punish reflection.
We’ll elevate output but ignore outcomes.
We’ll adopt a tool that reshapes us — and forget we had a say in how it did.
Let’s not just scale usage. Let’s scale wisdom.
Because AI isn’t going anywhere. But neither is responsibility.