You already know the familiar arguments. TypeScript buys you stronger guarantees and better tooling. JavaScript buys you fewer moving parts and directness. That old debate hasn't vanished; it's just been reframed by three shifts that matter in 2025:

  1. AI now sits in the editor, refactoring and scaffolding code faster than we type.
  2. Modern runtimes and toolchains have changed the cost of adopting (or avoiding) TypeScript.
  3. The language standards process hints at a future where "type-ish" syntax and JavaScript become friendlier neighbors.

This is a pragmatic review for the AI-augmented developer: what's changed, what's signal vs. noise, and where each choice wins.

The world your code lives in has moved

Runtimes are less allergic to TypeScript. Deno runs .ts files directly with built-in type checking and compilation, no extra setup. Bun executes TypeScript out of the box, transpiling on the fly with its native pipeline. Even Node has edged forward: Node.js can now run .ts files natively via type-stripping, and as of v23.6 it's on by default (no type checking; use tsc for that). This narrows the historical "setup tax" that once made TypeScript feel heavy for small services and scripts.

Tooling treats type checking as a separate lane. Vite, the de facto dev server for modern front-ends, transpiles TypeScript with esbuild (very fast) but doesn't type-check during that hot path. The official advice is: let your IDE and a parallel tsc --noEmit (or a plugin) enforce types while the dev server stays snappy. This separation (transpile fast, check in parallel) is now conventional wisdom.

Language evolution has reduced friction. TypeScript 5.0 aligned with the standardized ECMAScript decorators, cutting over from the long-standing experimental variant. That removes one notorious compatibility wobble with frameworks and transpilers.

And the future is flirting with "types as comments." TC39's Type Annotations proposal (often called "types as comments") remains at Stage 1. It's not real JavaScript yet, but the intent is clear: allow type syntax that engines will ignore while external checkers analyze it. If this crosses the river in the next few years, the psychological distance between JS and TS will shrink even more. For now, it's a trendline, not a plan you should bank on.

Where TypeScript stands right now

  • Latest stable is 5.9.x. npm shows 5.9.3 as the current release; Microsoft's 5.9 announcement notes that upgrading to 6.0 should be largely API-compatible. There is no GA 6.0 yet.
  • What 6.0 will likely mean: the team has been working toward a native/alternative implementation and used 2025 blog posts to flag that 6.x will bring some deprecations to align with that direction. Treat 6.0 as a cleanup/compatibility milestone rather than a new language paradigm.
  • Readiness for "6.0": Expect low-drama upgrades from 5.9 → 6.0; Microsoft says compatibility should be high. Keep an eye on deprecation warnings in 5.9 release notes.

Plan for 5.9 today, keep an eye on 6.0 (low-drama upgrade expected), and lean on the runtimes' native TS handling to simplify dev loops.

AI is the new teammate—and types are its compass

With Copilot-style assistants, the question is less "Can I write this function?" and more "Can the machine keep my codebase coherent as it scales?" Controlled experiments show AI copilots can improve task completion speed; qualitatively, teams report that types give AI a scaffold for safer edits and more accurate navigation—fewer "clever but wrong" suggestions when the constraints are explicit. Think of types as railings on a dark staircase: humans can feel their way without them, but the railings help everyone move faster and fall less.

Concretely, we see three compounding effects in AI-assisted shops:

  • Refactors are braver. Renames and signature changes propagate across the graph without as much dread; the assistant plus the checker catch more edges.
  • Prompting is crisper. "Make fetchInvoices return ReadonlyArray<Invoice> and preserve the branded InvoiceId type" is not poetry, but it's an unambiguous instruction for both an AI and the compiler.
  • Onboarding accelerates. New contributors (human or model) infer domain intent from types and JSDoc more reliably than from a folk taxonomy of filenames.
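The "branded InvoiceId" mentioned above is one such explicit constraint; a minimal sketch (all names hypothetical) looks like this:

```typescript
// A branded type: structurally just a string, but the phantom __brand
// property stops a plain string from being passed where an InvoiceId is
// expected -- the compiler (and an AI editing against it) must go through
// the constructor function.
type InvoiceId = string & { readonly __brand: "InvoiceId" };

const asInvoiceId = (raw: string): InvoiceId => raw as InvoiceId;

const totals = new Map<InvoiceId, number>();

function recordTotal(id: InvoiceId, total: number): void {
  totals.set(id, total);
}

recordTotal(asInvoiceId("INV-001"), 42);
// recordTotal("INV-001", 42); // compile error: string is not InvoiceId
console.log(totals.size);
```

At runtime the brand costs nothing; it exists only in the type system, which is exactly the kind of guardrail an assistant can't "creatively" route around.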

If you're using AI to push broad refactors through a codebase, TypeScript is a force multiplier. If your code is short-lived—scripts, glue, experiments—JS with JSDoc types and @ts-check may hit the sweet spot of speed with enough safety.

Size matters: A practical framework

Most "large" projects are actually medium. Use these axes to size yourself: people actively committing, lifetime of the code, surface area (packages/services), and API exposure (internal vs public).

Small

Definition: 1–2 devs; lifespan ≤ 6–9 months; ≤ 10k LOC; 1–3 deployables; internal users; API surface not public/long-lived. Defaults that work in 2025:
  • Language: JavaScript + JSDoc + // @ts-check for editor smarts with near-zero ceremony. Promote to TS later if it grows. (TS docs treat JSDoc-types as first-class.)
  • Runtime: Node 23+ (native type-strip) or Bun/Deno for instant feedback loops.
  • Validation: schema at the edges (Zod/Valibot/etc.); don't over-model internals.
  • Build: none (Node/Bun) or minimal (Vite for front-end). If using Vite, keep type-checking separate.
  • AI angle: let the assistant add JSDoc and drive "ts-check" fixes; you keep moving.
When to upgrade to TS: the moment the app must be maintained beyond a quarter, or another dev joins and you begin doing real refactors.
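What "editor smarts with near-zero ceremony" looks like in practice (function and field names hypothetical):

```javascript
// @ts-check
// Plain .js file: no build step, but with @ts-check the editor (or
// `tsc --noEmit` in CI) type-checks it from the JSDoc annotations.

/**
 * @param {{ id: string, total: number }[]} invoices
 * @param {string} id
 * @returns {{ id: string, total: number } | undefined}
 */
function findInvoice(invoices, id) {
  return invoices.find((inv) => inv.id === id);
}

// findInvoice("oops", 1); // flagged by the checker: wrong argument types
console.log(findInvoice([{ id: "a", total: 7 }], "a"));
```

Because the annotations live in comments, promoting this file to .ts later is mostly mechanical, which is exactly the upgrade path described above.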

Medium

Definition: 3–10 devs; 9–36 months lifetime; 10k–75k LOC; 2–10 packages/services; multiple environments; some stable internal APIs. Defaults that pay off:
  • Language: TypeScript (strict). Enable exactOptionalPropertyTypes and noUncheckedIndexedAccess, and prefer the satisfies operator for configs and literals.
  • Runtime: Node 23+ (native type-strip for dev), or Bun/Deno; still run tsc --noEmit in CI and locally (watch) for real checking.
  • Build: Vite/ESBuild/SWC for transpile speed; checker in parallel (vite-plugin-checker or tsc).
  • Structure: single repo, clear "boundary DTOs" at edges, domain types inside.
  • Docs & DX: API contracts via OpenAPI/JSON-Schema with generated types.
  • AI angle: do type-first refactors—change the types, compile, fix sites. The compiler becomes the map.
Smells you've drifted "large": weekly cross-team merges, public SDKs, frequent breaking-change discussions. That's your cue to adopt "large" controls.
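Of the strict flags above, noUncheckedIndexedAccess is the one that changes day-to-day code the most. A sketch of its effect (names hypothetical):

```typescript
// With "noUncheckedIndexedAccess": true in tsconfig, indexing a Record
// (or an array) yields `number | undefined`, so the compiler forces a
// guard at every lookup instead of letting a missing key surface as
// NaN three modules later.
const rateLimits: Record<string, number> = { free: 10, pro: 100 };

function limitFor(plan: string): number {
  const limit = rateLimits[plan]; // number | undefined under the flag
  return limit ?? 0;              // explicit fallback, checked by tsc
}

console.log(limitFor("pro"), limitFor("enterprise"));
```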

Large

Definition: 10–50+ devs; multi-year; ≥ 75k LOC (often 200k+ across packages); 10–100 packages/services; public APIs/SDKs; regulated domains; formal release trains. Defaults to institutionalize:
  • Language: TypeScript, strict everywhere.
  • Monorepo & builds: Project References + tsc -b with composite, declaration, declarationMap. This enables incremental builds and precise dependency graphs.
  • Public surface governance: adopt API Extractor to lock/type-review exported APIs; gate CI on API reports.
  • Runtime/Dev: Node 23+ (or Bun/Deno) for local loops; CI still runs full checks and produces .d.ts + artifacts.
  • Tooling: lint + format + type-check in separate steps; cache builds (Nx/Turbo) and automate references if needed.
  • Contracts: schema-first at service boundaries; versioned types; deprecation policy.
  • AI angle: let the assistant write and enforce codemods guided by type errors; reviewers focus on intent, not plumbing.
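A minimal Project References setup for the monorepo defaults above (package paths hypothetical):

```json
// packages/app/tsconfig.json -- built with `tsc -b packages/app`, which
// also rebuilds ../core incrementally when its sources change
{
  "compilerOptions": {
    "composite": true,
    "declaration": true,
    "declarationMap": true,
    "outDir": "dist",
    "rootDir": "src"
  },
  "references": [{ "path": "../core" }]
}
```

Each referenced package carries the same composite settings; tsc -b then walks the dependency graph and rebuilds only what changed, which is where the incremental-build win comes from.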

What's new enough to change your mind in 2025

1) Decorators you can actually standardize around

TypeScript 5.0 implements the standardized decorators proposal. Frameworks and libraries can converge on one story; you don't have to live in fear of "legacy decorators vs. modern decorators" footguns. If decorators are in your architecture (Angular, Lit, or your own metaprogramming), this alone removes a major reason some teams hedged in the past.

2) From "TypeScript compiles JavaScript" to "engines erase types"

Between Deno/Bun and Node's recent type-stripping, the runtime picture is simpler. You can treat TypeScript as syntax that tooling erases, not a second universe you must constantly appease. And while Node's support is focused on stripping rather than checking, that's still enough to reduce yak-shaving in small services and CLIs.

3) The build loop is lightweight (if you let it be)

Vite's docs make it explicit: transpile only in the dev server; run tsc --noEmit (or use a checker plugin) for errors. This keeps hot reload blazing fast even on big projects. Old complaints about "TypeScript slows my HMR to a crawl" often trace back to forcing the checker and the dev server to share a single pipeline. Separate them; the problem mostly evaporates.
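In package.json terms, the separation looks something like this (script names are a convention, not a requirement):

```json
{
  "scripts": {
    "dev": "vite",
    "typecheck": "tsc --noEmit --watch",
    "build": "tsc --noEmit && vite build"
  }
}
```

Run dev and typecheck in two terminals (or via a runner such as concurrently); the dev server never blocks on the checker, while the build script still refuses to ship type errors.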

4) TypeScript itself keeps trimming the fat

The TS team's steady push lately has been "smaller, simpler, faster." That arc includes the standardized decorators work and significant performance efforts—plus ongoing experimentation with native/transpiled implementations that promise speedups. This matters if your codebase is large; the compiler increasingly gets out of your way.

Where the frameworks are

React's official docs present TypeScript as the default way to express props and hooks; Next.js scaffolds with TypeScript automatically and positions itself as "TypeScript-first." In practice, most modern front-end starters assume you'll be on TS, even if they still tolerate .js. That isn't ideology; it's what most teams have chosen.

Survey data backs this up. JavaScript remains the most-used language overall, but TypeScript's share keeps climbing and is now used by a very large subset of professional developers (and a larger share among front-end and full-stack teams). If you build UIs or Node services professionally, the gravitational pull is toward TS.

The middle path: JavaScript with JSDoc types

There's a durable, underappreciated option: write JavaScript, annotate with JSDoc, and enable // @ts-check. You get many of the ergonomics—intellisense, basic safety, better editor help—without committing to .ts files or a compile step. For small libraries, quick scripts, and code that lives near the edges (lambda handlers, worker glue), this is a pragmatic compromise. Teams that choose this tend to:

  • Keep runtime validation as the real contract (e.g., Zod or Valibot) and use JSDoc to keep editors honest.
  • Publish plain .js (no build pipeline) while shipping .d.ts for consumers who want types.
  • Use eslint/biome to enforce consistency and rely on @ts-check for "enough" correctness.

Documentation from the TypeScript project itself treats this as a first-class path; it's not a hack.
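Notably, the "plain .js plus shipped .d.ts" combination above doesn't require hand-writing declarations: tsc can emit them from the JSDoc annotations. A tsconfig sketch:

```json
{
  "compilerOptions": {
    "allowJs": true,
    "checkJs": true,
    "declaration": true,
    "emitDeclarationOnly": true,
    "outDir": "types"
  },
  "include": ["src"]
}
```

Point the package's types field at the emitted files and consumers get full IntelliSense while you keep publishing unbuilt .js.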

Runtime truth still lives at runtime

No static type system removes the need to validate inputs at the boundary. The pattern that ages best in 2025 is:

  • Schema at the edge (Zod, Valibot, TypeBox, etc.) for runtime validation and parsing, generating types from schemas or vice-versa.
  • Types in the core for AI guidance, refactors, and developer speed.
  • OpenAPI/JSON-Schema as the lingua franca between services.

This is where AI actually shines: it's uncannily good at drafting schemas from examples and keeping types consistent across modules—so long as your project already speaks in schemas and types.
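The schema-at-the-edge pattern, sketched with a hand-rolled guard to stay dependency-free (in a real codebase Zod's z.object(...).parse(...) would play this role; all names hypothetical):

```typescript
// Runtime validation lives at the boundary; everything past parseInvoice
// can trust the static type without re-checking.
type Invoice = { id: string; total: number };

function parseInvoice(input: unknown): Invoice {
  if (typeof input !== "object" || input === null) {
    throw new Error("invalid invoice payload");
  }
  const candidate = input as Record<string, unknown>;
  if (typeof candidate.id !== "string" || typeof candidate.total !== "number") {
    throw new Error("invalid invoice payload");
  }
  return { id: candidate.id, total: candidate.total };
}

const invoice = parseInvoice(JSON.parse('{"id":"INV-1","total":42}'));
console.log(invoice.id, invoice.total);
```

The shape of the API is the point: unknown in, a trusted domain type out, and one throw site that marks exactly where the outside world ends.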

Decision heuristics for 2025 (AI in the loop)

Use this as a litmus test—assume an AI pair-programmer is always present.

Choose TypeScript when…
  • The codebase will sustain more than one team or several quarters of growth.
  • You expect regular large refactors (domain reshaping, cross-cutting concerns).
  • You build APIs/SDKs for others and want confidence in the public surface.
  • You're living in React/Next/SvelteKit/Astro or a Vite-based ecosystem that's already optimized for TS.
Choose JavaScript (+ JSDoc + @ts-check) when…
  • You're writing small services, CLIs, or scripts where fast iteration and simple deploys matter more than deep type modeling.
  • You hand off code to mixed-skill teams where a compile step is a barrier.
  • You're targeting edge runtimes or lightweight workers and want minimal artifacts—JS today, stronger types tomorrow.
Choose either confidently when…
  • You run on Deno or Bun. Both handle TypeScript with minimal ceremony; both run JavaScript perfectly. Your decision becomes about culture, not friction.

Patterns that play beautifully with AI

  • Schema-first contracts: Define Zod/JSON-Schema for inputs and outputs, generate types, and let the assistant wire them through your handlers and tests.
  • Boundary DTOs + narrow domain types: Keep "big" types at the edges (API, DB) and "small, descriptive" ones inside. AI has less room to improvise incorrectly.
  • Type-driven refactors: Ask the assistant to change types first, run the checker, then fix sites. This "types-then-repairs" loop is more reliable than code-then-types.
  • Prefer satisfies for configs: In TS code, const cfg = {...} satisfies Config keeps literal inference while guaranteeing shape. This prevents a legion of subtle config bugs and improves AI completions.
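The satisfies pattern from the last bullet, in miniature (Config is a hypothetical shape):

```typescript
type Config = { mode: "dev" | "prod"; port: number; plugins: string[] };

// `satisfies` checks the literal against Config without widening it, so
// cfg.mode keeps the narrow type "dev" (not the full union) -- downstream
// completions and exhaustiveness checks stay precise.
const cfg = {
  mode: "dev",
  port: 3000,
  plugins: ["logger"],
} satisfies Config;

console.log(cfg.mode, cfg.port);
```

Had we written const cfg: Config = {...} instead, typos would still be caught but cfg.mode would widen to "dev" | "prod", losing the literal inference that makes this pattern useful.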

Anti-patterns that age badly

  • Long-running, type-free services that silently widen contracts until no one knows what the data looks like. (AI will make changes with confidence… and sometimes with confidence alone.)
  • Monolithic any escapes to "unblock" a sprint. These metastasize. If you must, hide them behind small functions and document why.
  • Forcing type checking into the dev server's hot path. Don't pay that latency tax unnecessarily; keep builds fast and checks parallel.
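The second anti-pattern has a cheap containment strategy: when an any is truly unavoidable, confine it to one small, documented function. A sketch (legacyParse stands in for a hypothetical untyped dependency):

```typescript
// `any` enters here and is immediately narrowed; nothing downstream
// ever sees it.
const legacyParse: any = (raw: string) => JSON.parse(raw); // untyped dep stand-in

function parseUserIds(raw: string): string[] {
  const result = legacyParse(raw); // the only `any` in the module
  if (!Array.isArray(result) || !result.every((x) => typeof x === "string")) {
    throw new Error("unexpected shape from legacyParse");
  }
  return result;
}

console.log(parseUserIds('["u1","u2"]'));
```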

A quick reality check on costs

TypeScript is not "free safety." You pay in:
  • Modeling time. Someone has to design types that express your domain without turning every object into an algebra problem.
  • Tooling surface. Linters, formatters, tsconfig variants—each adds a knob that someone might turn wrong.
  • Training the humans. Types enforce clarity; they also reveal ambiguity that comments were papering over.
JavaScript is not "free speed." You pay in:
  • Implicit contracts. Without types or schemas, every function is a trust fall.
  • Refactor risk. The moment you scale past a couple thousand lines, the cost of "find-replace and hope" compounds.

AI changes the slope of both curves—making TypeScript easier to adopt and JavaScript easier to keep honest—but it doesn't remove tradeoffs.

One-page prescriptions you can drop into a repo

Small (jsconfig.json idea)

Use JS with // @ts-check; add types: ["node"] in jsconfig.json. Keep buildless where possible. Vite/React? Let Vite transpile; run tsc --noEmit only in CI.

Medium

{
  "compilerOptions": {
    "strict": true,
    "exactOptionalPropertyTypes": true,
    "noUncheckedIndexedAccess": true
  }
}

Prefer satisfies for config objects; schemas at I/O; Node 23+ or Bun/Deno for local runs; CI: tsc --noEmit + tests.

Large

Workspace with Project References; composite, declaration, declarationMap; API Extractor with CI gating; per-package tsconfig baselines; release notes generated from API diff.

Recommendations by scenario

Greenfield front-end or full-stack app (React/Next/Vite)

Default to TypeScript. It's the paved road, and your dev server won't suffer if you separate transpilation from checking. If you're building design systems or public components, TS isn't optional; it's your public spec.

Back-end services with Deno or Bun

Either language works; the setup delta is tiny. If the service is small and ephemeral, JS with JSDoc is fine. For anything with domain weight or shared libraries, choose TS.

CLI tools, workers, and glue code

Prefer JavaScript with JSDoc and @ts-check. Keep deploys dead simple; keep runtime schemas strict. If the tool becomes popular or complex, promote to TypeScript in a weekend.

Libraries/SDKs you'll publish

TypeScript. Your users' autocomplete and compile errors are your brand. Ship .d.ts faithfully (or generate them) and treat the type surface as API. React/Next consumers will expect it.

Where this is heading

The long bet is convergence. Runtimes are making TypeScript easier to run. Toolchains are encouraging fast transpilation with parallel checking. The standards process is exploring type syntax that JavaScript engines can ignore safely. Frameworks write docs in TypeScript first and fall back to JavaScript second, not the other way around. The practical meaning of "I write JavaScript" is drifting toward "I write JavaScript with types somewhere nearby."

Your north star is unchanged: make invariants legible. Whether you spell them as TypeScript annotations, JSDoc comments, or runtime schemas, the goal is the same—turn tacit knowledge into explicit shape so that both humans and machines can collaborate productively.

The strange gift of 2025 is that you no longer have to choose purity. Use TypeScript where it compounds your team's leverage, JavaScript where it keeps you nimble, and let AI bridge the gaps with guardrails instead of guesswork.

Conclusion: Size yourself ruthlessly, then choose

The question isn't "TypeScript or JavaScript?" It's how explicit your invariants must be for humans and machines to collaborate safely.

AI makes both paths faster; types make the speed survivable. Size yourself ruthlessly using the framework above (Small/Medium/Large), then pick the lightest setup that still lets you refactor boldly.

The fastest teams in 2025 aren't debating language purity. They're shipping features with AI assistance, catching errors with types or schemas, and running code on runtimes that finally respect both choices.

---

About the Author: Odd-Arild Meling has been writing production JavaScript since before Node.js existed and TypeScript since before it was cool. He's migrated codebases from JS to TS, TS to JS, and back again—always with real users waiting on the other side. Currently building edge-first architectures at Gothar where both languages ship to production daily.