Introduction
TypeScript is where reliable types meet modern JavaScript development. If you are using Claude Code to accelerate your day-to-day work, you can turn the model into a productive pair-programmer that respects your types, your build constraints, and your framework conventions. This guide distills Claude Code tips for TypeScript into practical patterns you can apply immediately, with an emphasis on type-safe design, maintainability, and measurable outcomes.
Whether you are building a React UI, a Node or Edge API, or a full-stack monorepo, the same principle holds: the more specific your constraints, the higher the quality of the generated code. With a few targeted prompts, some compiler settings, and a short checklist, you can get consistent results and track how your AI-assisted workflows evolve over time using Code Card.
Language-Specific Considerations for TypeScript
TypeScript is structurally typed, which gives the model flexibility but also increases the chance of accidental compatibility. Guide the model with explicit types, crisp contracts, and compile-time checks. The following best practices ensure Claude respects your domain types and framework boundaries.
Pin your compiler contract
Enable strictness options that surface problems early and give the assistant an unambiguous target. Start with a tsconfig like this:
{
"compilerOptions": {
"target": "ES2022",
"module": "ESNext",
"moduleResolution": "Bundler",
"strict": true,
"exactOptionalPropertyTypes": true,
"noUncheckedIndexedAccess": true,
"useUnknownInCatchVariables": true,
"noImplicitOverride": true,
"skipLibCheck": true,
"isolatedModules": true,
"jsx": "react-jsx",
"types": ["node", "jest"]
},
"include": ["src"]
}
Point Claude to your tsconfig before asking for code generation. Add a short preamble like: "Target ES2022 modules, strict mode, JSX runtime react-jsx, moduleResolution Bundler". This narrows the search space and improves correctness.
Prefer unknown over any, and constrain generics
When asking the model to create abstractions, specify generic constraints and avoid any. For example, request <T extends Record<string, unknown>> instead of open-ended types. If you see any in an output, prompt for a type-safe alternative and mention that unknown must be narrowed.
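As a minimal sketch of what to ask for, the hypothetical helpers below show both halves of this advice: a generic constrained to `Record<string, unknown>` instead of `any`, and an `unknown` value narrowed with a type guard before use. The function names are illustrative, not from any library.

```typescript
// A constrained generic: K is tied to T's keys, so the return type is exact.
function pluck<T extends Record<string, unknown>, K extends keyof T>(
  obj: T,
  key: K
): T[K] {
  return obj[key];
}

// A type guard that narrows `unknown` before any property access.
function isString(value: unknown): value is string {
  return typeof value === "string";
}

function describe(value: unknown): string {
  if (isString(value)) {
    // `value` is narrowed to string here, so .length is safe.
    return `string of length ${value.length}`;
  }
  return "not a string";
}
```

If Claude returns `any` in a similar spot, quoting one of these signatures back is usually enough to get a type-safe revision.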
Use type-first contracts with Zod or TypeBox
Type-first schemas help Claude generate parsers, runtime validators, and inferred types that stay in sync. Ask the model to produce Zod schemas and derive TypeScript types via z.infer<typeof Schema>. This reduces mismatch between runtime data and static types.
Framework-specific guidance
- React and Next.js - Request avoidance of React.FC, use explicit props types, and include Suspense or use constraints if you use Next 13+ App Router. Ask for import type for type-only imports to reduce bundle size.
- NestJS or Express - Specify DTOs and validation strategy (class-validator or Zod). Ask for dependency injection-friendly code, no singletons where unnecessary.
- Prisma - Ask to use PrismaClient at request scope in serverless, or a shared instance in long-lived processes. Provide the schema snippet for accurate types.
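The shared-instance pattern for long-lived processes can be sketched as follows. To keep the example self-contained, a stand-in `FakeClient` class replaces `PrismaClient`; with Prisma you would import `PrismaClient` from `@prisma/client` and substitute it here.

```typescript
// Stand-in for PrismaClient so this sketch runs without @prisma/client.
class FakeClient {
  connected = true;
}

// Cache the client on globalThis so dev-server hot reloads do not
// create a fresh connection pool on every module re-evaluation.
const globalForClient = globalThis as unknown as { client?: FakeClient };

export function getClient(): FakeClient {
  if (!globalForClient.client) {
    globalForClient.client = new FakeClient();
  }
  return globalForClient.client;
}
```

In serverless handlers, prefer creating the client per request instead of caching it, since each invocation may run in a fresh, short-lived environment.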
Show the model your domain types early
Paste the types you want to keep stable, then ask for implementations that satisfy them. Use the TypeScript satisfies operator to force exactness and catch property drift.
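A short sketch of the satisfies pattern, with an illustrative Route type: the object is checked against the contract at compile time, but its literal types are not widened, so typos and extra properties fail immediately.

```typescript
type Route = { path: string; auth: boolean };

// `satisfies` validates the object against Record<string, Route>
// without widening, so routes.home.path keeps its literal type "/".
// A misspelled key like "pth" or an extra property would not compile.
const routes = {
  home: { path: "/", auth: false },
  settings: { path: "/settings", auth: true },
} satisfies Record<string, Route>;

export const homePath = routes.home.path;
```

Pasting a block like this alongside your real domain types gives Claude a concrete exactness bar to hit.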
Key Metrics and Benchmarks
To turn these Claude Code tips into lasting improvements, instrument your work. Track metrics that reflect TypeScript quality, speed, and stability.
Suggestion acceptance rate
- What to measure - Fraction of model suggestions applied without major edits.
- Targets - 60 to 80 percent for straightforward tasks, 30 to 50 percent for exploratory refactors or unfamiliar APIs.
- Anti-pattern - 100 percent acceptance usually signals rubber-stamping. Introduce tests or stricter types and ask the model to iterate until checks pass.
Type error density
- What to measure - New TypeScript errors per 200 lines changed.
- Targets - Less than 1 error per 200 LOC after one iteration, less than 1 per 500 LOC after the second iteration.
- Tip - Ask Claude to run a mental compile step: "List possible ts errors given strict, exactOptionalPropertyTypes, isolatedModules" before showing code.
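The density calculation itself is simple; a hypothetical helper like this can normalize error counts across diffs of different sizes so sessions are comparable.

```typescript
// Type error density: new tsc errors per 200 changed lines of code.
// A result under 1 meets the first-iteration target above.
export function errorDensityPer200(newErrors: number, changedLoc: number): number {
  if (changedLoc <= 0) return 0; // nothing changed, nothing to measure
  return (newErrors / changedLoc) * 200;
}
```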
Test pass rate and time-to-green
- What to measure - Fraction of tests passing in the first run after applying a suggestion, minutes to green build.
- Targets - 80 percent first-pass when adding small features, under 5 minutes median to green in CI.
- Tip - Provide the testing stack explicitly: Jest or Vitest, React Testing Library, supertest for APIs.
Token-to-change efficiency
- What to measure - Tokens consumed per accepted line of code or per resolved issue.
- Targets - 15 to 60 tokens per accepted LOC for boilerplate-heavy tasks; lower for repetitive scaffolding that can be cached as reusable snippets.
Refactor footprint and churn
- What to measure - Percentage of changed lines that revert within 48 hours.
- Targets - Keep churn under 10 percent for refactors, under 5 percent for bug fixes.
Practical Tips and Code Examples
Prompt scaffolds that Claude reliably follows
Role: Senior TypeScript engineer
Context:
- Strict tsconfig (ES2022, moduleResolution Bundler)
- React + Vite, React Testing Library
- Zod for runtime validation
- Do not use any, prefer unknown with narrowing
Task:
Create a type-safe fetch helper and an example API call.
Return: one code block per file, no comments, small focused changes.
Type-safe fetch client with Zod
import { z } from "zod";
const User = z.object({
id: z.string().uuid(),
email: z.string().email(),
name: z.string(),
});
export type User = z.infer<typeof User>;
export class ApiError extends Error {
constructor(public status: number, message: string) {
super(message);
this.name = "ApiError";
}
}
export async function getJson<T>(url: string, schema: z.ZodSchema<T>, init?: RequestInit): Promise<T> {
const res = await fetch(url, { ...init, headers: { "Accept": "application/json", ...(init?.headers ?? {}) } });
if (!res.ok) {
throw new ApiError(res.status, `Request failed: ${res.statusText}`);
}
const data: unknown = await res.json();
const parsed = schema.safeParse(data);
if (!parsed.success) {
throw new Error(`Validation error: ${parsed.error.message}`);
}
return parsed.data;
}
// Example usage
export async function getUser(id: string): Promise<User> {
return getJson(`/api/users/${id}`, User);
}
Ask the model to generate error types and Zod schemas first, then build callers that depend on them. This reduces backtracking.
React component with generic data and type-only imports
import type { ReactNode } from "react";
type Column<T> = {
key: keyof T;
header: ReactNode;
render?: (value: T[keyof T], row: T) => ReactNode;
};
type Props<T> = {
data: T[];
columns: Column<T>[];
};
export function DataTable<T extends Record<string, unknown>>({ data, columns }: Props<T>) {
return (
<table>
<thead>
<tr>{columns.map((c, i) => <th key={String(c.key) + i}>{c.header}</th>)}</tr>
</thead>
<tbody>
{data.map((row, i) => (
<tr key={i}>
{columns.map((c, j) => {
const value = row[c.key];
return <td key={j}>{c.render ? c.render(value, row) : String(value)}</td>;
})}
</tr>
))}
</tbody>
</table>
);
}
When requesting UI code, state the JSX runtime, routing system, and any restrictions like "no default exports" to align with your lint rules.
Pattern matching for exhaustive checks
import { match } from "ts-pattern";
type State =
| { type: "idle" }
| { type: "loading" }
| { type: "error"; message: string }
| { type: "success"; data: unknown };
export function renderState(s: State): string {
return match(s)
.with({ type: "idle" }, () => "Ready")
.with({ type: "loading" }, () => "Loading...")
.with({ type: "error" }, ({ message }) => `Error: ${message}`)
.with({ type: "success" }, () => "OK")
.exhaustive();
}
Ask Claude to make unions exhaustive using a matching library or a never check. This prevents future states from slipping through silently.
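The library-free alternative mentioned above is the never check: route the switch default through a function whose parameter is typed never, so adding a new union member turns the call into a compile error. This sketch uses an illustrative Status union, not the State type from the ts-pattern example.

```typescript
type Status =
  | { type: "idle" }
  | { type: "loading" }
  | { type: "error"; message: string };

// If any Status member is unhandled, `s` is not narrowed to never
// in the default branch and this call fails to compile.
function assertNever(value: never): never {
  throw new Error(`Unhandled status: ${JSON.stringify(value)}`);
}

export function label(s: Status): string {
  switch (s.type) {
    case "idle":
      return "Ready";
    case "loading":
      return "Loading...";
    case "error":
      return `Error: ${s.message}`;
    default:
      return assertNever(s);
  }
}
```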
NestJS DTOs with class-validator
import { IsEmail, IsString, Length } from "class-validator";
export class CreateUserDto {
@IsEmail() email!: string;
@IsString() @Length(1, 50) name!: string;
}
Specify which validation system you use, then ensure handlers type against DTOs and return serializable shapes. Ask for E2E tests using supertest when generating controllers.
Refactor prompts that produce minimal diffs
Constraints:
- Provide a unified diff against src/utils/date.ts only
- No unrelated formatting changes
- Keep exports stable
Task:
Replace moment.js usage with date-fns, remove mutable date state, and add a testable parse function with a Zod guard.
Direct the model to produce minimal, review-friendly diffs. This keeps churn low and improves code review speed.
Tracking Your Progress
You can turn your private workflow into measurable, shareable insights. Code Card aggregates your AI-assisted coding patterns across sessions, visualizes contribution streaks, and surfaces language-specific stats that matter for TypeScript, like error density and type coverage improvements.
- Set up in 30 seconds with npx code-card. The CLI guides you to connect your editor and provider logs locally, with privacy controls that filter secrets and code content.
- Enable per-language labeling. Tag sessions as TypeScript-only or mixed JavaScript to compare outcomes and spot where strict typing helped.
- Track prompts as templates. Save your most effective TypeScript prompts, then measure acceptance rate and time-to-green by template.
- Instrument CI. Export build results and test summaries to correlate model usage with build stability.
If you are exploring broader AI workflows that span server and client, see AI Code Generation for Full-Stack Developers | Code Card. To stay motivated and consistent, connect your daily output with streak analytics via Coding Streaks for Full-Stack Developers | Code Card.
Sample workflow checklist
- Start with a prompt that includes your tsconfig constraints, framework, and data contracts.
- Ask for types first, then implementations that satisfy those types.
- Request unit tests or integration tests in the same turn, aligned to your runner.
- Run compile and tests locally, then ask the model to diagnose specific errors, including exact messages.
- Record outcomes to your profile so you can compare sessions and spot regressions in type quality.
Conclusion
TypeScript rewards up-front clarity. The strongest Claude Code results come from precise compiler settings, explicit schemas, and prompts that describe constraints instead of outcomes. Keep generics constrained, prefer unknown over any, use runtime validation, and demand exhaustive handling for unions. Capture metrics like acceptance rate, error density, and time-to-green so you can iterate on your process, not just your code. With the right setup and a light habit of measurement, your AI-assisted TypeScript development becomes faster, more type-safe, and easier to scale across teams.
FAQ
How should I structure prompts for TypeScript to reduce type errors?
Open with your compiler and framework constraints, then show the exact types you want to satisfy. Ask for types first, then implementations. Include a check step: "List potential ts errors under strict mode, exactOptionalPropertyTypes, noUncheckedIndexedAccess." This catches structural oversights before you paste code.
What libraries pair best with AI-assisted TypeScript?
Zod for runtime validation, ts-pattern for exhaustive unions, React Testing Library for UI tests, Vitest or Jest for unit testing, and Prisma for data access with strong types. These libraries reduce ambiguity, which makes the model's output more reliable.
How do I keep bundle size under control when generating React code?
Ask for import type for type-only imports, prefer named imports, avoid barrel re-exports in hot paths, and target modern output in tsconfig so your bundler can shake unused code. Specify your router and SSR strategy so the model avoids client-only APIs in server code.
What is a good acceptance rate for suggestions in a TypeScript codebase?
Expect 60 to 80 percent for straightforward implementations and 30 to 50 percent for refactors or novel integrations. If you are below those ranges, add stricter types, smaller tasks, and explicit libraries. If you are above them consistently, you might be under-specifying tests or skipping reviews.
How can I showcase my AI-assisted TypeScript work publicly without leaking code?
Publish aggregate metrics, streaks, token breakdowns, and anonymized language-level stats. Share the prompts you are comfortable with and scrub secrets. A high-level profile with contribution graphs gives proof of practice without revealing private code.