Developer Portfolios with JavaScript | Code Card

Developer Portfolios for JavaScript developers. Track your AI-assisted JavaScript coding patterns and productivity.

Introduction

JavaScript portfolios that stand out tell a story about how you solve problems across the stack, not just how many repositories you have. Modern developer portfolios increasingly include evidence of AI-assisted coding, from the way you prompt to how your diffs land in production. If you work in Node.js, React, Next.js, Vue, Svelte, or TypeScript, showcasing real, measurable outcomes is what convinces teams and clients you can ship.

Publicly surfacing your AI-assisted JavaScript patterns alongside traditional projects is now a credible differentiator. With Code Card you can publish your Claude Code stats as a shareable profile that pairs contribution graphs, token breakdowns, and achievement badges with the commits and PRs that matter. Treat it like a lightweight, data-backed layer on top of your GitHub that shows how your prompts translate into working code.

Language-Specific Considerations for JavaScript Portfolios

JavaScript is a rapid-feedback language, so your AI usage will look different than in systems languages. That is a strength: the language's bias toward iteration speed, in-browser testing, and quick serverless deploys works in your favor. Use these aspects to position your impact with clarity.

  • Front-end frameworks - Show how AI helps you prototype in React or Next.js without sacrificing maintainability. Examples include generating prop types, scaffolded hooks, and test files. For Vue and Svelte, focus on component boundaries, stores, and SSR pitfalls you caught early.
  • TypeScript adoption - Highlight where AI accelerated type migrations, created safe union types, or drafted utility types that reduced runtime errors. Emphasize strictness settings and how inference reduced bugs.
  • Node and API work - Showcase endpoint scaffolding, validation layers, and performance tuning. AI is especially good at boilerplate like schema validation and test doubles, but your portfolio should highlight how you refactored the generated code for readability and security.
  • Tooling integration - Document how AI suggestions align with ESLint, Prettier, Vite, or webpack rules. If you use Vitest or Jest, note how many generated tests you kept versus rewrote, and why.
  • Security and correctness - For JavaScript, small oversights can leak sensitive data or introduce XSS. Demonstrate explicit reviews of AI output that removed unsafe string interpolation or implemented content security policies.
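
The security bullet above can be made concrete with a small before-and-after. This is an illustrative sketch: `escapeHtml` and `renderBio` are hypothetical helpers standing in for an AI draft that interpolated unescaped user input into HTML, which a review then fixed.

```javascript
// Reviewed version: escape user input before interpolating into HTML.
function escapeHtml(s) {
  return s
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

// Unsafe AI draft looked like: `<p>${user.bio}</p>` with raw input.
// After review, only escaped text is interpolated.
function renderBio(bio) {
  return `<p>${escapeHtml(bio)}</p>`;
}

console.log(renderBio('<img src=x onerror=alert(1)>'));
// → <p>&lt;img src=x onerror=alert(1)&gt;</p>
```

A note like this in a portfolio shows you reviewed generated output for XSS rather than shipping it verbatim.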

Key Metrics and Benchmarks for JavaScript Developer Portfolios

Metrics make developer portfolios credible. Use numbers that connect prompts to production outcomes and fit the way JavaScript apps evolve.

  • Prompt-to-diff ratio - Prompts per 100 lines changed. A healthy range for front-end features is 2-6 prompts per 100 LOC during prototyping, then 1-3 during refinement. Lower is not always better - clarity of prompts matters more than sheer count.
  • Completion acceptance rate - Percentage of AI suggestions accepted after human review. Target 35-60 percent for UI scaffolds, 20-40 percent for complex data or TypeScript-heavy areas. Explain lower rates when you prefer manual refactors.
  • Test assistance coverage - Percentage of test files seeded by AI that survived code review. For Jest or Vitest, 50-70 percent retention for simple components is realistic, 20-40 percent for business logic.
  • Time-to-green - Average minutes from first prompt to passing tests. For small UI components, 10-25 minutes. For new API endpoints with validation and basic integration tests, 25-60 minutes.
  • Defect escape rate - Percentage of AI-generated lines later modified to fix bugs. Keep this under 10 percent for mature codebases and document remediation patterns.
  • Bundle and perf impact - Net change in bundle size and key vitals like TTI or LCP after AI-assisted changes. For example, +3 KB gzip with TTI unchanged is acceptable, +25 KB with degraded LCP is not.
  • Docs and maintainability - ESLint warnings, TypeScript errors, and cyclomatic complexity deltas before and after AI edits. Show trend lines that move toward fewer warnings over time.
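
Computing the first two metrics is straightforward once you log sessions. A minimal sketch, assuming a hypothetical session-log shape (this is not a Code Card API, just a format you might keep yourself):

```javascript
// Hypothetical per-session log entries
const sessions = [
  { prompts: 4, suggestionsShown: 10, suggestionsKept: 5, linesChanged: 120 },
  { prompts: 2, suggestionsShown: 6, suggestionsKept: 2, linesChanged: 40 }
];

// Sum the fields across sessions
const totals = sessions.reduce(
  (acc, s) => ({
    prompts: acc.prompts + s.prompts,
    shown: acc.shown + s.suggestionsShown,
    kept: acc.kept + s.suggestionsKept,
    lines: acc.lines + s.linesChanged
  }),
  { prompts: 0, shown: 0, kept: 0, lines: 0 }
);

// Completion acceptance rate, as a percentage
const acceptanceRate = (totals.kept / totals.shown) * 100;
// Prompt-to-diff ratio: prompts per 100 lines changed
const promptsPer100Loc = (totals.prompts / totals.lines) * 100;

console.log(`acceptance: ${acceptanceRate.toFixed(1)}%`);
console.log(`prompts per 100 LOC: ${promptsPer100Loc.toFixed(1)}`);
```

Even two weeks of data like this, aggregated per feature area, gives the ranges above something verifiable to stand on.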

Profiles that include contribution graphs and token breakdowns give useful context for the metrics above. Use them to group work by domain, for example state management, API integration, performance, or testing. Aggregate by week so hiring managers can see cadence rather than one-off spikes, then annotate meaningful streaks and achievements with concise explanations tied to released features. One concise mention of platform features is enough to orient readers, so resist repeating it.

Practical Tips and Code Examples

Use concrete before-and-after examples to demonstrate how you guided AI, what you kept, and what you rewrote.

React component factoring with memoization

// Before - frequent re-renders from inline handlers
function ProductList({ items, onSelect }) {
  return (
    <ul>
      {items.map(item => (
        <li key={item.id} onClick={() => onSelect(item.id)}>
          {item.name}
        </li>
      ))}
    </ul>
  );
}

// After - AI suggested useCallback; you memoized the item so it pays off
import { memo, useCallback } from 'react';

const ProductItem = memo(function ProductItem({ item, onClick }) {
  return <li onClick={() => onClick(item.id)}>{item.name}</li>;
});

function ProductList({ items, onSelect }) {
  const handleClick = useCallback((id) => onSelect(id), [onSelect]);

  return (
    <ul>
      {items.map(item => (
        <ProductItem key={item.id} item={item} onClick={handleClick} />
      ))}
    </ul>
  );
}

Portfolio note: explain that you validated re-render reductions with React DevTools and kept the handler extraction, while rejecting an unnecessary useMemo for static labels.

TypeScript-safe API handler with runtime validation

import express from 'express';
import { z } from 'zod';

const app = express();
app.use(express.json());

const CreateTodo = z.object({
  title: z.string().min(1),
  due: z.string().datetime().optional()
});

type CreateTodoInput = z.infer<typeof CreateTodo>;

app.post('/api/todos', (req, res) => {
  const parsed = CreateTodo.safeParse(req.body);
  if (!parsed.success) return res.status(400).json(parsed.error.flatten());
  const todo: CreateTodoInput = parsed.data;
  // ... insert into DB
  res.status(201).json({ ok: true, todo });
});

Portfolio note: show that an AI draft handled zod scaffolding, then you tightened the schema, added safeParse, and introduced the typed alias to document request contracts.

Next.js server actions with caching and error boundaries

// app/actions.ts
'use server';

import 'server-only';
import { cache } from 'react';

export const fetchUser = cache(async (id: string) => {
  const res = await fetch(`${process.env.API}/users/${id}`, { cache: 'no-store' });
  if (!res.ok) throw new Error('Failed to load');
  return res.json();
});

// app/page.tsx
import { Suspense } from 'react';
import { fetchUser } from './actions';

// Async child component so the Suspense fallback can actually render
async function UserDetails({ id }: { id: string }) {
  const user = await fetchUser(id);
  return <pre>{JSON.stringify(user, null, 2)}</pre>;
}

export default function Page() {
  return (
    <Suspense fallback={<p>Loading...</p>}>
      <UserDetails id="123" />
    </Suspense>
  );
}

Portfolio note: explain that AI proposed server actions, but you added React cache and server-only to prevent accidental client bundling.

Testing generated logic with Vitest

// src/math.ts
export function clamp(n: number, min: number, max: number) {
  return Math.min(Math.max(n, min), max);
}

// tests/math.test.ts
import { describe, it, expect } from 'vitest';
import { clamp } from '../src/math';

describe('clamp', () => {
  it('clamps within range', () => {
    expect(clamp(5, 0, 10)).toBe(5);
  });
  it('clamps low', () => {
    expect(clamp(-1, 0, 10)).toBe(0);
  });
  it('clamps high', () => {
    expect(clamp(99, 0, 10)).toBe(10);
  });
});

Portfolio note: call out how AI drafted the first test, then you added edge cases and consistent naming. Include a short metric, for example 60 percent of the test boilerplate retained with zero lint warnings.

Micro-benchmarking a hot path

// Measure a critical utility to validate a suggested refactor
import { performance } from 'node:perf_hooks';
import { clamp } from './math';

const samples = 100000;
const start = performance.now();
for (let i = 0; i < samples; i++) {
  clamp(i % 100, 0, 50);
}
const ms = performance.now() - start;
console.log(`clamp x${samples} took ${ms.toFixed(2)}ms`);

Portfolio note: if AI proposed a micro-optimization, keep the numbers. Many optimizations do not move the needle in JavaScript once JIT warms up. Data beats assumptions.

Configuring lint and format alignment

// .eslintrc.cjs
module.exports = {
  extends: ['next/core-web-vitals', 'plugin:@typescript-eslint/recommended'],
  parser: '@typescript-eslint/parser',
  plugins: ['@typescript-eslint'],
  rules: {
    '@typescript-eslint/explicit-module-boundary-types': 'off',
    'no-console': ['warn', { allow: ['warn', 'error'] }]
  }
};

Portfolio note: mention that AI drafted the config, then you pruned deprecated rules and aligned console policy with your team's logging standard.

Tracking Your Progress

To make your JavaScript portfolio persuasive, track how prompts become reliable features over weeks, not just a handful of screenshots. Two things matter most: consistent cadence and context-rich metrics.

  • Cadence - Use coding streaks to show sustained practice. A 10- to 14-day run where you ship small, reviewed changes beats a one-off week of spiky usage. See Coding Streaks for Full-Stack Developers | Code Card for ideas on structuring achievable streaks without burnout.
  • Context - Group tokens and suggestions by feature area, for example auth, caching, or UI state. Pair token spikes with PR links and short rationales, for example "experimented with three state management approaches, chose Zustand for bundle size and simplicity."
  • Quality gates - Record lint error delta, TypeScript error delta, and unit test pass rate per week. A good look is rising complexity only when features justify it, plus stable or decreasing errors.
  • Outcomes - When your work lands in production, log performance and UX impacts, for example -12 percent LCP on product grid after image optimization, or +8 points on CWV for checkout.
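
The quality-gate bullet above translates directly into a small weekly report. A sketch, assuming hypothetical weekly snapshots of lint errors, TypeScript errors, and test results (field names are illustrative):

```javascript
// Hypothetical weekly snapshots from CI
const weeks = [
  { week: '2024-W01', lintErrors: 42, tsErrors: 9, testsPassed: 180, testsTotal: 200 },
  { week: '2024-W02', lintErrors: 30, tsErrors: 4, testsPassed: 210, testsTotal: 215 }
];

// Deltas against the previous week, plus the unit test pass rate
const report = weeks.map((w, i) => ({
  week: w.week,
  lintDelta: i === 0 ? 0 : w.lintErrors - weeks[i - 1].lintErrors,
  tsDelta: i === 0 ? 0 : w.tsErrors - weeks[i - 1].tsErrors,
  passRate: ((w.testsPassed / w.testsTotal) * 100).toFixed(1) + '%'
}));

console.log(report);
```

Negative lint and TypeScript deltas alongside a stable pass rate are exactly the trend lines worth annotating on a public profile.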

If you want a public, data-rich profile that pairs contribution graphs with token breakdowns and achievement badges, publish through Code Card and link it from your README. Treat it like a dynamic appendix for your resume that readers can verify against live repositories.

For strategy on using generation responsibly across the stack, this guide dovetails with AI Code Generation for Full-Stack Developers | Code Card. For improving your prompts, especially when working on open source, see Prompt Engineering for Open Source Contributors | Code Card.

Conclusion

JavaScript developer portfolios move fastest when they combine practical code with measurable impact. Be explicit about how AI helped, what you changed, and how your choices improved maintainability or performance. Keep your metrics honest, your examples small and focused, and your cadence steady. If you choose to make your stats public, a concise profile through Code Card can complement your GitHub by adding verified AI usage patterns and outcomes.

FAQ

How do I show AI helped without looking like I outsourced my work?

Lead with outcomes and reviews. Annotate PRs with prompt intent, the suggestion you accepted, and the manual fixes you applied. Include tests and perf numbers that validate the change. A short note like "AI drafted input validation, I added zod schemas and safeParse for runtime safety" signals ownership and craft.

What JavaScript areas benefit most from AI suggestions?

Boilerplate and scaffolding: component shells, routing, TypeScript types and simple tests. Regexes, data mapping utilities, and repetitive API handlers are good candidates. High value areas still require human design: state management choices, concurrency boundaries, accessibility, and security. Use suggestions as a starting point, then refactor for clarity.

What metrics should I avoid?

Avoid vanity counts like raw tokens consumed without outcomes, or lines generated without tests. Do not present acceptance rates without noting review discipline. Prefer metrics that attach to shipped features, such as test retention, time-to-green, and bundle impact. Be transparent about defect escapes and what you did to reduce them.

How do I talk about TypeScript in my portfolio?

Show concrete wins: stricter tsconfig settings that prevented a class of runtime errors, a utility type that removed optional chaining in 30 percent of call sites, or a migration plan with staged strictness. Include a snippet where types guided refactoring and a metric like zero TS errors on CI for the last 14 days.

Is it worth including cross-language comparisons?

If you also work in Ruby or C++, comparisons can highlight how your prompting strategy adapts to different ecosystems. For example, JavaScript prompts may focus on SSR boundary placement or hydration, while C++ prompts may focus on memory safety and build flags. If that resonates, you can explore related guides such as Developer Profiles with Ruby | Code Card or Developer Profiles with C++ | Code Card to tune your approach across stacks.

Ready to see your stats?

Create your free Code Card profile and share your AI coding journey.

Get Started Free