Developer Portfolios with Ruby | Code Card

Developer portfolios for Ruby developers: track your AI-assisted Ruby coding patterns and productivity.

Why Ruby portfolios benefit from AI-native signals

Ruby developers have long showcased craftsmanship through clear APIs, tidy gems, and expressive tests. Today, developer portfolios that include AI-assisted coding signals add another layer of credibility. Recruiters and collaborators want to see not only what you built, but how you work with modern tools like Claude Code, how often you test, and how you improve velocity without sacrificing quality.

Public, shareable profiles that visualize your Ruby and Rails activity, contribution cadence, and AI-pairing habits make your work tangible. With Code Card, you can publish Claude Code stats, token breakdowns, and achievement badges alongside your repositories and talks, turning your day-to-day coding into a cohesive narrative of outcomes and learning.

Language-specific considerations for Ruby and Rails

Ruby's dynamic features and Rails's conventions influence how AI assistance shows up in your activity and how to present it in a portfolio. Keep these nuances in mind:

  • DSL-heavy ecosystems: Gems like RSpec, Capybara, and ActiveRecord use domain-specific languages. AI systems excel at scaffolding DSL syntax, but you should validate subtle semantics like lazy loading and scope chaining.
  • Metaprogramming caution: Ruby makes it easy to metaprogram. AI suggestions that lean on method_missing, define_method, or monkey-patching should be justified. In portfolios, explain why a dynamic approach was chosen over explicit composition.
  • Rails conventions over configuration: AI assistance often mirrors tutorial patterns. Highlight where you enforced Rails best practices like service objects, POROs, and strict parameters instead of rote scaffolds.
  • Testing culture: Ruby shines with RSpec and Minitest. Show the ratio of AI-generated tests to hand-written tests, plus your final coverage and time-to-green for CI. This demonstrates that AI accelerates quality, not just line counts.
  • Performance and memory: Ruby's runtime and garbage collector have tradeoffs. Document when you or the model optimized hot paths, reduced object allocations, or leveraged memoization safely.
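The memoization point deserves care: the common `@value ||=` idiom silently re-runs the computation whenever the cached result is nil or false. A minimal sketch of the safer `defined?` guard (class and method names are illustrative, with a counter added so the caching is observable):

```ruby
# Safe memoization: `@value ||=` would re-run expensive_lookup every time
# because the cached result is false. A defined? guard caches falsy values.
class FeatureFlags
  attr_reader :lookups

  def initialize
    @lookups = 0
  end

  def beta_enabled?
    return @beta_enabled if defined?(@beta_enabled)
    @beta_enabled = expensive_lookup
  end

  private

  def expensive_lookup
    @lookups += 1 # count calls to prove the cache works
    false         # stands in for a slow config or database read
  end
end
```

Calling `beta_enabled?` repeatedly performs the lookup only once, even though the cached value is false.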

Key metrics and benchmarks for Ruby developer portfolios

Metrics help translate daily coding into evidence. Include a mix of AI usage, quality, and delivery signals tailored to Ruby and Rails:

  • Prompt-to-commit ratio: Number of LLM prompts or sessions that resulted in merged code. A healthy signal shows consistent conversion with small, reviewable diffs.
  • Completion edit distance: How much you modified AI-suggested code before merge. Lower is not always better. Aim for meaningful edits that clarify intent and align with idiomatic Ruby.
  • RuboCop and StandardRB offense delta: Count of offenses before and after a change. Track how AI suggestions fare against your enforced style and how you remediate violations.
  • RSpec/Minitest coverage delta: Coverage improvement per PR, plus time-to-green in CI. For Rails, include system and request test additions that validate routes and controllers.
  • Rails migration safety rate: Ratio of migrations that include backfills, safe defaults for new NOT NULL columns, or safety wrappers to avoid downtime. Display checks for long-running migrations and lock-time mitigation.
  • Query performance checks: Use logs or the bullet gem to show N+1 query reductions. Include the average query count per request for critical endpoints.
  • Token usage and day-parting: Visualize Claude Code token spend by hour to show when you pair program with an LLM and how it aligns with batch refactors versus focused bug fixes.
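The prompt-to-commit ratio is simple to compute once AI-assisted commits are tagged with a commit-message convention such as an [ai] prefix. A sketch, assuming commit subjects have already been extracted (for example via git log --format=%s); the helper name is illustrative:

```ruby
# Sketch: compute the prompt-to-commit ratio from commit subjects.
# Assumes AI-assisted commits carry an "[ai]" prefix (a convention,
# not a standard).
def prompt_to_commit_ratio(subjects)
  return 0.0 if subjects.empty?
  ai = subjects.count { |s| s.start_with?("[ai]") }
  (ai.to_f / subjects.size).round(2)
end
```

For example, `prompt_to_commit_ratio(["[ai] refactor: extract service", "fix: typo"])` returns 0.5.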

Benchmark guidance for mid-level Ruby engineers shipping Rails features weekly:

  • Prompt-to-commit ratio: 0.6 to 0.8 across feature work
  • Median completion edit distance: 15 to 35 percent of lines altered after generation
  • Coverage delta per PR: +1 to +5 percent for net-new features
  • N+1 regressions introduced: zero, with bullet gating in CI or staging
  • RuboCop offense delta: net negative on every feature branch

For deeper inspiration on quality signals, see Top Code Review Metrics Ideas for Enterprise Development.
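The migration safety rate is easiest to evidence with a concrete pattern. A sketch of the backfill-then-constrain approach; table and column names are illustrative, and a production migration would typically define a local model class rather than referencing the app's Order directly:

```ruby
# Sketch: add a NOT NULL column without downtime by splitting the work.
# Table and column names are illustrative.
class AddStatusToOrders < ActiveRecord::Migration[7.1]
  def up
    add_column :orders, :status, :string          # 1. nullable column, instant
    Order.in_batches(of: 1_000) do |batch|        # 2. backfill in small batches
      batch.update_all(status: "pending")
    end
    change_column_null :orders, :status, false    # 3. enforce the constraint
  end

  def down
    remove_column :orders, :status
  end
end
```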

Practical tips and Ruby code examples for showcasing AI-assisted work

Refactor AI output into idiomatic Ruby

Large models sometimes produce verbose or Java-like Ruby. Show your ability to refine it. Example refactor from an AI-suggested service object to a clean PORO with dependency injection:

# Before - AI suggestion with global access and inline validations
class SendWelcomeEmail
  def self.call(user_id)
    user = User.find(user_id)
    if user && user.email
      Mailer.deliver(:welcome, to: user.email, name: user.name)
      true
    else
      false
    end
  end
end

# After - PORO, explicit dependencies, predictable return object
Result = Struct.new(:ok?, :error)

class SendWelcomeEmail
  def initialize(mailer: Mailer)
    @mailer = mailer
  end

  def call(user)
    return Result.new(false, :missing_email) unless user&.email
    @mailer.deliver(:welcome, to: user.email, name: user.name)
    Result.new(true, nil)
  rescue StandardError => e
    Result.new(false, e.class.name.downcase.to_sym)
  end
end

In your portfolio, annotate that the first draft came from Claude Code, then show the reasoning behind your refactor: dependency injection for testability, explicit return type, and safer nil checks.

Rails scopes and query safety

Make it clear when you guided an LLM to avoid anti-patterns like broad includes or unscoped queries. Example of tightening a scope and documenting N+1 prevention:

# Tight, composable scope
class Order < ApplicationRecord
  scope :recent, -> { where("created_at >= ?", 30.days.ago) }
  scope :with_totals, -> {
    select("orders.*, SUM(line_items.total_cents) AS total_cents")
      .joins(:line_items)
      .group("orders.id")
  }
end

# Controller usage
def index
  orders = Order.recent.with_totals.includes(:customer) # includes is narrowly scoped
  render json: orders
end

Pair this with a bullet screenshot or log snippet that proves N+1 was addressed.
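To make the "bullet gating in CI" benchmark concrete, a sketch of enabling the bullet gem in the test environment so N+1 regressions fail the suite (Bullet.enable, Bullet.bullet_logger, and Bullet.raise are real Bullet settings; placement in config/environments/test.rb follows the gem's README):

```ruby
# config/environments/test.rb (fragment; requires the bullet gem)
config.after_initialize do
  Bullet.enable        = true # detect N+1 queries and unused eager loading
  Bullet.bullet_logger = true # write findings to log/bullet.log
  Bullet.raise         = true # raise in tests so CI gates regressions
end
```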

RSpec from prompts to robust tests

LLMs can draft RSpec examples quickly. Show how you harden them using factories and clear expectations:

# AI-drafted spec tightened with FactoryBot and explicit matcher
RSpec.describe SendWelcomeEmail do
  let(:mailer) { class_double(Mailer, deliver: true) } # the service receives the Mailer class, so verify class methods
  let(:user)   { create(:user, email: "dev@example.com") }

  subject(:service) { described_class.new(mailer: mailer) }

  it "delivers a welcome email and returns ok" do
    result = service.call(user)
    expect(mailer).to have_received(:deliver).with(:welcome, to: user.email, name: user.name)
    expect(result).to have_attributes(ok?: true, error: nil)
  end

  it "fails safely when email missing" do
    user.email = nil # set in memory; update! could trip presence validations
    result = service.call(user)
    expect(result).to have_attributes(ok?: false, error: :missing_email)
  end
end

Compute completion edit distance with Ruby

Demonstrate rigor by calculating how much you changed AI suggestions before merge. Include a small snippet to compute Levenshtein distance and a normalized edit ratio:

# Lightweight Levenshtein implementation and normalized ratio
def levenshtein(a, b)
  a, b = a.to_s, b.to_s
  m, n = a.length, b.length
  return n if m.zero?
  return m if n.zero?

  d = Array.new(m + 1) { Array.new(n + 1) }
  (0..m).each { |i| d[i][0] = i }
  (0..n).each { |j| d[0][j] = j }

  (1..m).each do |i|
    (1..n).each do |j|
      cost = a[i - 1] == b[j - 1] ? 0 : 1
      d[i][j] = [
        d[i - 1][j] + 1,
        d[i][j - 1] + 1,
        d[i - 1][j - 1] + cost
      ].min
    end
  end
  d[m][n]
end

def edit_ratio(original, final)
  dist = levenshtein(original, final)
  max_len = [original.length, final.length].max
  return 0.0 if max_len.zero?
  dist.to_f / max_len
end

Store this per-change metric and graph it over time to show a trend toward cleaner first-pass generations or more surgical edits.
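One way to build that graph, sketched against hypothetical JSONL records; the {"at": ..., "ratio": ...} schema is an assumption for illustration, not a Code Card format:

```ruby
require "json"
require "date"

# Aggregate per-change edit ratios into a weekly average.
# Each input line is JSON like {"at": "<ISO8601 time>", "ratio": 0.22}
# (an assumed schema for illustration).
def weekly_edit_ratios(jsonl_lines)
  by_week = Hash.new { |h, k| h[k] = [] }
  jsonl_lines.each do |line|
    rec = JSON.parse(line)
    week = Date.parse(rec["at"]).strftime("%G-W%V") # ISO week key, e.g. "2024-W02"
    by_week[week] << rec["ratio"]
  end
  by_week.transform_values { |ratios| (ratios.sum / ratios.size).round(3) }
end
```

Feeding it two changes from the same week, say ratios 0.2 and 0.4, yields a single "2024-W02" bucket averaging 0.3.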

Micro-benchmark before and after

Show that you did not just accept AI output - you measured. Ruby tools like Benchmark and benchmark-ips help quantify improvements:

require "benchmark/ips"

def naive_sum(arr)
  total = 0
  arr.each { |n| total += n }
  total
end

def fast_sum(arr)
  arr.sum
end

arr = Array.new(100_000) { rand(1000) }

Benchmark.ips do |x|
  x.report("naive_sum") { naive_sum(arr) }
  x.report("fast_sum")  { fast_sum(arr) }
  x.compare!
end

Include the output in your portfolio to turn performance claims into evidence.

Tracking your progress and publishing a shareable profile

A disciplined workflow makes your Ruby achievements easy to verify and pleasant to browse. Here is a simple path from daily work to public profile:

  1. Tag AI-assisted commits: Add a lightweight convention in commit messages so you can compute ratios later. Example: [ai] refactor: extract service object for welcome emails. Include tokens or session IDs if your IDE exposes them.
  2. Capture tokens locally: If your editor integrates Claude Code, log session metadata to a git-ignored JSON Lines file so you can publish aggregates without leaking content. Example script:
require "json"
require "time"
require "fileutils"

LOG = File.expand_path("~/.ai/claude_code_log.jsonl")

def log_session(model:, tokens:, files:)
  FileUtils.mkdir_p(File.dirname(LOG)) # ensure ~/.ai exists before appending
  event = {
    at: Time.now.utc.iso8601,
    model: model,
    tokens: tokens,
    files: files
  }
  File.open(LOG, "a") { |f| f.puts(event.to_json) }
end

# Example usage:
# log_session(model: "claude-code", tokens: 1842, files: ["app/services/send_welcome_email.rb"])
  3. Publish in seconds: Run npx code-card to set up and push your stats. The CLI aggregates activity and renders a profile that feels like a contribution graph for AI-assisted coding.
  4. Automate in CI: Add a scheduled job so your profile stays fresh. For GitHub Actions:
name: Update Developer Portfolio Stats
on:
  schedule:
    - cron: "0 3 * * *"

jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      - run: npm install -g code-card
      - run: code-card publish --ci

In your README or personal site, embed graphs via the generated badges. Code Card turns these aggregates into visuals that help hiring managers and collaborators understand your patterns at a glance.

For portfolio audiences like recruiters, pair your graphs with narrative context, for example how AI accelerated onboarding to a legacy Rails monolith or how you maintained strict RuboCop compliance while increasing delivery throughput. For additional framing ideas, read Top Developer Profiles Ideas for Technical Recruiting and Top Coding Productivity Ideas for Startup Engineering.

Finally, connect your repository or organization and let Code Card sync on a cadence that matches your release cycle. You can keep sensitive code private while still showcasing aggregate metrics, achievements, and trends.

Conclusion

Ruby and Rails excel at readable code and fast iteration. When you complement that with transparent AI-pairing metrics, your developer portfolio shifts from screenshots and claims to trustworthy, inspectable signals. Use small, frequent commits, clear tests, and measurable improvements. Summarize the story in a profile powered by Code Card, and give teams a crisp view of how you build, test, and ship.

FAQ

How should I present AI-assisted code without revealing proprietary details?

Share aggregates and redacted snippets. Keep raw session logs in a private location, publish tokens and edit ratios, and show de-identified diffs that illustrate techniques like extraction, scoping, and test hardening. Focus on outcomes like coverage, latency, or N+1 reductions.

What Ruby frameworks and libraries should I highlight?

For web apps, include Rails, Hotwire, and Sidekiq or GoodJob for background jobs. For APIs, showcase Rails or Grape. For testing, show RSpec or Minitest with FactoryBot and Faker. For quality, list RuboCop, StandardRB, Brakeman, and bullet. For performance, include benchmark-ips and rack-mini-profiler.

How do AI assistance patterns differ for Ruby compared to other languages?

Ruby's DSLs and metaprogramming encourage concise code, so LLM output benefits from your refactoring toward conventional patterns. You will often convert generic suggestions into idiomatic blocks, scopes, and POROs. Emphasize test-first workflows and guardrails like strict parameters and safe migrations.

What metrics matter most to hiring managers evaluating a Ruby portfolio?

They care about consistent delivery, safe migrations, clean tests, and readable code. Show coverage deltas, CI time-to-green, RuboCop offense reductions, and concrete before-after performance. Include AI usage metrics like prompt-to-commit ratio and completion edit distance to demonstrate responsible tooling.

Can I use these portfolio practices for non-Rails Ruby projects?

Yes. The same approach works for gems, CLIs, and services. Swap Rails-specific metrics for library quality signals like semantic versioning discipline, documented APIs, and benchmarked hot paths. The underlying idea is the same - combine coding achievements, disciplined tests, and transparent AI usage to tell a reliable story.

Ready to see your stats?

Create your free Code Card profile and share your AI coding journey.

Get Started Free