*Subtitle: Constraint craft is the new scarce currency once intelligence is affordable for everyone.*

There’s a specific kind of quiet you hear right before a new abstraction takes over an industry. It’s the hush in a lecture hall before the punchline lands, the pause in a war room before someone says the thing nobody wants to say out loud, the little vacuum that forms when the old ranking system—who’s smartest, who’s fastest, who’s most credentialed—starts to feel antique.

Jensen Huang’s version of that moment, delivered to a room of students at Cambridge, is blunt enough to be useful: “Intelligence is about to be a commodity.”

If you’re a working CTO or tech lead, your first response probably isn’t existential panic. It’s inventory management.

If intelligence becomes a commodity, then:

  • code generation becomes baseline,
  • summarization becomes plumbing,
  • “analysis” becomes a button,
  • and an enormous fraction of what used to look like skill becomes UI.

So what’s left?

Huang’s answer (as the story now circles the internet) is taste: the ability to choose well in the presence of infinite options, and—more viciously—the ability to choose what not to do.

I want to take that framing seriously, but also treat it like a professor treats a promising theorem scribbled on a napkin: assume it might be true, then stress-test it.

Because “taste” is a slippery word. People use it to mean aesthetic sensibility, or vibes, or brand, or the ability to buy the correct chair. But there’s a harder, more engineerable definition hiding inside it:

Taste is the discipline of building correct constraints.

And in the AI era, constraints are the only scarce resource that matters.

---

1) Commodity intelligence isn’t a metaphor; it’s an economic event

We’ve been here before in other forms.

When electricity became cheap and widely available, the competitive advantage didn’t go to the company with the smartest candle-makers. It went to the company that redesigned factories around motors and power distribution. When the internet made information abundant, the advantage didn’t go to the person who could memorize encyclopedias. It went to the person who could filter, route, compress, and act.

Andrew Ng famously compared AI to electricity—a general-purpose capability that spreads into everything, changing workflows more than it changes job titles.

But the deeper technical reason “intelligence” trends toward commodity pricing is something reinforcement learning people have been muttering for years: scale beats handcrafted cleverness.

Rich Sutton calls this “The Bitter Lesson”: in the long run, methods that leverage general computation—search and learning—win, even if the human-centric approach feels more elegant.

Translation for builders:

  • The market rewards systems that keep improving when you pour compute + data into them.
  • The market punishes systems that require genius babysitting.

So yes—intelligence becomes tap water not because intelligence is trivial, but because the delivery mechanism becomes standardized. The “smartness” migrates into infrastructure.

You can see the physical shadow of this everywhere now. The world is spending obscene amounts of capital on data centers, because AI demand is compute demand with a nicer haircut. And the constraint isn’t only chips—it’s power, cooling, grid capacity, site permitting, transmission lines, and all the slow, stubborn atoms that refuse to scale like software.

The International Energy Agency projects global data-center electricity consumption could roughly double by 2030 (to around 945 TWh in its base case), with AI as a major driver. The U.S. Department of Energy (via Lawrence Berkeley National Laboratory) similarly describes rapid growth in data-center load and expectations that it may double or triple in coming years.

And in one of the more perfect parables of our moment, Satya Nadella has publicly complained that Microsoft has AI GPUs sitting idle because there isn’t enough electricity to plug them in.

So when we say “intelligence is becoming a commodity,” we’re not describing a philosophical mood. We’re describing an industry converting cognitive work into capital expenditure + operating constraints.

That’s what commodities are: standardized inputs, broadly available, priced by logistics.

Which means the next question becomes brutally practical:

If intelligence is abundant, what differentiates builders?

---

2) “Poorly defined work” is just work with missing constraints

The most useful part of Huang’s framing is the distinction between defined and poorly defined work.

Defined work has stable requirements and measurable outputs. It’s the kind of work you can turn into a benchmark. AI is increasingly good at this kind of thing because benchmarks are training data wearing a lab coat.

Poorly defined work is work where the requirements are incomplete, contradictory, political, emotional, or changing in real time. It’s work where the question *is* the product.

This isn’t mystical. It’s a simple property of systems:

  • If the constraints are known, optimization is automatable.
  • If the constraints are unknown, the job is to invent constraints.

That invention process is what “taste” actually is.

And for CTOs, taste isn’t primarily about making things pretty. It’s about making tradeoffs that don’t collapse at scale.

The canonical Huang quote on strategy is the one every senior engineer eventually learns the hard way: strategy is choosing what not to do.

In a world where an AI can generate 100 designs, 50 architectures, 20 roadmap variants, and a complete PRD before lunch, the limiting reagent is no longer output. It’s selection.

If you’re average at selection, you will drown in plausible options.

If you’re excellent at selection, you will look almost supernatural—not because you have secret information, but because you have better constraints.

---

3) Taste, redefined: the art of constraint design

Let’s define taste in a way that you can actually use in an engineering org:

Taste is the practiced ability to select constraints that maximize long-term system value under uncertainty.

That’s it. No incense. No berets.

Taste shows up as:

  • knowing which metrics are fake comfort and which metrics are real pain,
  • knowing when “more features” is cowardice disguised as ambition,
  • knowing when performance work matters and when it’s ego,
  • knowing what to log, what to measure, what to ignore,
  • and knowing what to delete.

Taste is also the opposite of cargo-culting. A lot of modern tech leadership is just cosplay: people dressing up in the rituals of high-performing companies without understanding the underlying physics.

First principles thinking—Huang’s other obsession—matters because it’s a way to escape cosplay. You reduce the problem to invariants: latency, throughput, coordination costs, human attention, incentives, time.

The specific tech stack is rarely the invariant. The invariant is: what are we optimizing for, and why?

So yes, AI makes intelligence cheap. But it also makes self-deception expensive, because the machine will happily accelerate your nonsense.

If you ask it for a plan without knowing what you want, it will give you a plan—confidently, fluently, and wrong in the exact shape of your confusion.

Taste is what prevents that.

---

4) Four additions CTOs must bolt onto the “taste” thesis

The “taste” story is compelling, but incomplete for people who ship software in the real world—where security reviews exist, regulators exist, and your cloud bill shows up like a monthly horror novella.

So here are four additional items that belong in the survival kit. They’re not separate from taste; they are the ways taste becomes operational.

A) Tempo: taste isn’t a decision, it’s a decision loop

A surprising amount of competitive advantage comes from how fast you can update your beliefs.

John Boyd’s OODA loop (Observe–Orient–Decide–Act) became famous because it frames competition as getting inside the opponent’s decision cycle.

For CTOs, the opponent is rarely another company in a clean head-to-head fight. The opponent is:

  • the market changing,
  • your own technical debt,
  • the slow drift of requirements,
  • and the internal bureaucracy you accidentally built while trying to “scale.”

In AI-saturated environments, tempo becomes even more important because:

  • you can prototype faster,
  • you can test ideas faster,
  • you can generate variants faster,
  • and therefore you can get lost faster.

So taste must become a loop, not a proclamation.

Operationally, this means:

  • shorter feedback cycles from production,
  • faster incident learning,
  • disciplined postmortems,
  • and the ability to reverse decisions without ego.

(Notice how quickly this stops being a “design” conversation and becomes an org design conversation. That’s not an accident.)

B) Trust and risk: taste now includes governance as a product feature

In 2025 and beyond, “taste” that ignores AI risk management is just a fast way to become a cautionary tale.

NIST’s AI Risk Management Framework is explicit about what “trustworthy AI” entails—characteristics like validity and reliability, safety, security and resilience, accountability and transparency, explainability, privacy, and fairness—plus a structured approach to managing those risks.

In Europe, the AI Act introduces risk-tiered obligations and bans certain “unacceptable risk” practices; the EU has also published guidance on prohibited practices. And the regulatory timeline has been actively debated, including proposals to delay certain “high-risk” rules—meaning the compliance landscape is itself a moving target.

For CTOs, this changes the definition of “good taste”: good taste is building systems that can explain themselves under pressure.

Not in the vague “explainable AI” marketing sense, but in the concrete sense:

  • What data did we use?
  • What assumptions did we encode?
  • What harms did we anticipate?
  • What monitoring do we have in place?
  • What happens when it fails?

When intelligence is cheap, trust becomes the premium tier.
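
To make that concrete rather than aspirational, here’s a minimal sketch of one way to encode those five questions as a deployment gate. The `GovernanceRecord` class and its field names are illustrative assumptions, not part of NIST’s framework or any library; the point is that “explain itself under pressure” can be a data structure, not a PDF.

```python
from dataclasses import dataclass


@dataclass
class GovernanceRecord:
    """Illustrative record answering the five questions for one deployed model."""
    model_name: str
    data_sources: list[str]        # What data did we use?
    assumptions: list[str]         # What assumptions did we encode?
    anticipated_harms: list[str]   # What harms did we anticipate?
    monitors: list[str]            # What monitoring do we have in place?
    failure_plan: str              # What happens when it fails?

    def is_reviewable(self) -> bool:
        # A deployment gate: refuse to ship if any answer is missing.
        return all([self.data_sources, self.assumptions,
                    self.anticipated_harms, self.monitors, self.failure_plan])


record = GovernanceRecord(
    model_name="support-triage-v3",
    data_sources=["2023–2024 support tickets (anonymized)"],
    assumptions=["English-language tickets only"],
    anticipated_harms=["misrouted urgent tickets"],
    monitors=["weekly precision/recall on a held-out audit set"],
    failure_plan="fall back to the human triage queue",
)
assert record.is_reviewable()
```

The useful property isn’t the class itself; it’s that a missing answer blocks the ship.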

C) Physical constraints: intelligence is abundant, but watts are not

A world that runs on models is a world that runs on electrons.

This is not a poetic flourish. It is a limiting factor that shapes strategy.

We already mentioned the IEA projections about data-center electricity demand growth. But the texture of the constraint is showing up in politics and finance: utilities planning huge generation expansions to serve data centers, private equity stampeding into data-center deals, and national conversations about grid infrastructure.

Even the Financial Times has been mapping the looming “AI power” crunch as a strategic constraint.

So CTO taste now includes a kind of energy literacy:

  • understanding that optimization is not only about latency but also about cost per token,
  • that “just call the model” is a budgeting decision, not a free default,
  • that caching and retrieval aren’t boring—they’re survival,
  • and that architecture decisions have literal thermodynamic consequences.

If you’re leading an “AI everywhere” initiative, the question isn’t “can we?” It’s “can we sustainably afford this at scale?”
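
Here’s a back-of-the-envelope sketch of what that energy-and-token literacy looks like in practice. The prices, volumes, and cache-hit rate below are placeholder assumptions, not quotes from any provider; the point is that caching and model choice change the bill multiplicatively, not marginally.

```python
def monthly_llm_cost(requests_per_day: float,
                     tokens_per_request: float,
                     price_per_million_tokens: float,
                     cache_hit_rate: float = 0.0) -> float:
    """Rough monthly spend for one LLM-backed feature.

    Cached requests are treated as free here; in practice they still cost
    storage and lookups, just far less than a full model call.
    """
    billable_tokens_per_day = requests_per_day * (1.0 - cache_hit_rate) * tokens_per_request
    return billable_tokens_per_day * 30 * price_per_million_tokens / 1_000_000


# Placeholder numbers: 200k requests/day, ~3k tokens each, $5 per million tokens.
baseline = monthly_llm_cost(200_000, 3_000, 5.00)
with_cache = monthly_llm_cost(200_000, 3_000, 5.00, cache_hit_rate=0.6)
print(f"baseline: ${baseline:,.0f}/mo, with 60% cache hits: ${with_cache:,.0f}/mo")
```

At these made-up numbers, a 60% cache hit rate takes the same feature from roughly $90,000 to $36,000 a month. That’s architecture as finance.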

D) The AI operating model: taste must become a learning system, not a hero

One of the most damaging myths in tech leadership is the heroic architect: the lone person with perfect judgment who makes the right calls.

That myth dies in the AI era, because there are too many decisions, too fast, across too many surfaces.

The org that wins is the org that turns decisions into a learning machine—an “AI factory” operating model where data and models continuously improve processes and products. That general frame has been popularized in management literature about competing in the age of AI: the idea that advantage comes from building systems that scale learning across the organization, not just scaling headcount.

Translated into CTO language:

  • Your architecture should capture feedback by default.
  • Your product should be instrumented for learning, not just analytics dashboards.
  • Your team should treat deployment as the beginning of discovery, not the end of delivery.

Taste, at scale, is what remains when heroics are replaced by loops.
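
As a sketch of what “capture feedback by default” can mean at the code level, assuming nothing beyond Python’s standard library: wrap every model-backed call so its inputs, outputs, and eventual outcome land in one append-only log that the next evaluation or training pass can read. The decorator name and log format here are illustrative, not a prescribed pattern.

```python
import json
import time
from functools import wraps

FEEDBACK_LOG = "feedback_events.jsonl"


def record_for_learning(func):
    """Log inputs and outputs of a model-backed call as append-only JSON lines."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        event = {
            "ts": time.time(),
            "call": func.__name__,
            "inputs": {"args": args, "kwargs": kwargs},
            "output": result,
            "outcome": None,  # filled in later, e.g. by a user action or a label
        }
        with open(FEEDBACK_LOG, "a") as f:
            f.write(json.dumps(event, default=str) + "\n")
        return result
    return wrapper


@record_for_learning
def classify_ticket(text: str) -> str:
    # Stand-in for a real model call.
    return "billing" if "invoice" in text.lower() else "general"


classify_ticket("Where is my invoice for March?")
```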

---

5) The uncomfortable part: scars as data, not identity

The version of this story circulating online leans into suffering: the idea that AI hasn’t had a broken heart, hasn’t failed, hasn’t endured the humiliation of building something real.

There’s truth there, but it’s easy to turn it into macho mythology: pain as proof of legitimacy.

A better framing—more useful, less melodramatic—is this:

Humans learn from embodied consequence.

We build a private dataset of “what happens when…” that’s not fully representable as text. It’s not only knowledge; it’s calibration. It’s the difference between reading about outages and remembering the smell of your own adrenaline at 03:12 when the pager goes off and the graphs look like a cardiogram in a horror movie.

Huang has also talked openly about sacrifice and the burden of leadership, describing strategy itself as a form of sacrifice—choosing what not to do.

For CTOs, the point isn’t to romanticize suffering. The point is to convert experience into better priors:

  • better instincts about operational risk,
  • better empathy for users under stress,
  • better skepticism toward “perfect” plans,
  • better ability to sense when a system is lying to you.

AI can simulate. Humans can remember.

Taste is, in part, the difference.

---

6) A CTO’s playbook for cultivating taste in the commodity-intelligence era

If I were teaching this as a Stanford-style course (but with fewer midterms and more production incidents), I’d give you seven practices. Not because seven is magical—because it’s the number of things you can remember while Slack is on fire.

  1. Treat AI output as a proposal, never as a conclusion
     - Make “What would change my mind?” a standard question in design reviews. The model is fast; you are responsible.
  2. Build a “constraint ledger” for every major initiative
     - Capture the invariants (latency, privacy, budget, staffing reality), the non-goals, the explicit tradeoffs you’re accepting, and what would force a reversal. This is taste made visible (a sketch follows this list).
  3. Practice strategic subtraction as an engineering ritual
     - Once per quarter: delete something (a feature nobody uses, a service nobody trusts, a dashboard nobody reads, a metric that incentivizes nonsense). Taste grows when you get good at removal.
  4. Make feedback loops the default architecture
     - If you cannot measure it, you cannot learn from it. Instrumentation is not bureaucracy; it’s your nervous system.
  5. Build trust like it’s a performance metric
     - Adopt a risk framework (NIST AI RMF is a solid reference model), and map it to your actual system: data flows, model behavior, monitoring, incident response.
  6. Learn the physics of your AI bill
     - Understand token economics, caching, retrieval augmentation, batching, and where you can swap expensive intelligence for cheap computation. Remember: watts are not infinite.
  7. Optimize for tempo, not theatrics
     - Speed is not rushing. Speed is tight cycles: ship → observe → revise. OODA is not a slogan; it’s an operating system.
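
And since practice 2 is the one that most often stays abstract, here is a hedged sketch of a constraint ledger as a plain data structure. Every field name and number is an illustrative assumption; the value is that the ledger is reviewable, diffable, and checked into the same repo as the code it constrains.

```python
from dataclasses import dataclass


@dataclass
class ConstraintLedger:
    """Illustrative, per-initiative record of the constraints taste has chosen."""
    initiative: str
    invariants: dict[str, str]      # hard limits: latency, privacy, budget, staffing
    non_goals: list[str]            # things we are explicitly not doing
    accepted_tradeoffs: list[str]   # costs we take on with eyes open
    reversal_triggers: list[str]    # observations that would force a change of course


ledger = ConstraintLedger(
    initiative="AI-assisted support triage",
    invariants={
        "p95 latency": "< 800 ms",
        "PII": "never leaves our VPC",
        "monthly model spend": "< $40k",
    },
    non_goals=["fully automated responses without human review"],
    accepted_tradeoffs=["slightly lower recall in exchange for explainable routing"],
    reversal_triggers=["misrouting rate above 5% for two consecutive weeks"],
)
```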

---

7) A final image: the editor at the edge of the flood

Picture the near future: your team can generate code, tests, docs, UI variants, and launch plans at ludicrous speed. A thousand options per hour. The output is not the bottleneck.

The bottleneck is the human standing at the edge of the flood, choosing which river to divert, which levee to build, which neighborhood to protect, which structure to abandon.

That person is not “smarter” in the old sense. They are better at choosing constraints. They have better taste.

And taste, inconveniently, is earned the slow way: by building, shipping, breaking, learning, and returning to first principles when fashion starts to feel like truth.

Intelligence is becoming cheap. The world will not become simple. It will become faster.

So the job of an innovative CTO is not to outrun the machine.

It’s to steer.